I got it converted over to an oF project, but I'm interested in extending it to support alpha transparency. Currently the shader uses the alpha channel to store a “depth” value.
After this is applied, a DoF-blurred version of the scene is created using the blur value stored in that channel, and then composited with the original scene.
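The composite shader boils down to mixing the sharp and blurred images per pixel, driven by that stored value. Paraphrasing from memory rather than quoting the real code, with made-up sampler names:

```cpp
// Paraphrased composite step, not the actual shader from the article:
// mix the sharp scene and the blurred scene per pixel, driven by the
// blur/"depth" amount that the first pass packed into the alpha channel.
string compositeFrag =
    "uniform sampler2D sharpTex;   // original scene, 'depth' in .a\n"
    "uniform sampler2D blurTex;    // DoF-blurred scene\n"
    "void main() {\n"
    "    vec4 sharp   = texture2D(sharpTex, gl_TexCoord[0].st);\n"
    "    vec4 blurred = texture2D(blurTex,  gl_TexCoord[0].st);\n"
    "    gl_FragColor = vec4(mix(sharp.rgb, blurred.rgb, sharp.a), 1.0);\n"
    "}\n";
// loaded with e.g. shader.setupShaderFromSource(GL_FRAGMENT_SHADER, compositeFrag);
```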
What I want to do is figure out how to change this pipeline to allow for alpha. I was thinking I might need to use a separate texture instead of the alpha channel, or perhaps render the scene twice: once with alpha, once without.
I think if I were given just a nudge in the right direction I could figure it out, but I'm a bit lost on how to approach the problem.
Let me know if there is more relevant information that is not included and I can try to explain further.
I’ve never done anything like it before, but I’m pretty sure what you’ll want to look into is MRT (multiple render targets). I think you’ll want glDrawBuffers, and you’ll wind up with a frag shader that writes the color/alpha to the main buffer and a depth value to a separate buffer, in the same drawing pass. Again, no experience with MRT, but I imagine you could even set up the depth target as a single-channel floating-point buffer, which would be a lot better than an 8-bit alpha channel.
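I'd imagine the setup looks something like this; completely untested on my end, and the formats/names are guesses:

```cpp
// Untested MRT sketch (assumes a GL context exists, e.g. inside ofApp::setup()).
// One FBO, two color attachments, both written in a single drawing pass.
int w = 1024, h = 768;                 // your render size
GLuint fbo, colorTex, depthValTex;

glGenTextures(1, &colorTex);           // scene color, with real alpha
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenTextures(1, &depthValTex);        // single-channel float "depth" target
glBindTexture(GL_TEXTURE_2D, depthValTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, w, h, 0, GL_RED, GL_FLOAT, NULL); // needs GL3/ARB_texture_rg

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, depthValTex, 0);

GLenum bufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);                // route frag shader outputs to both targets

// In the frag shader you'd then write both targets in one pass:
//   gl_FragData[0] = vec4(sceneColor.rgb, sceneColor.a); // real alpha survives
//   gl_FragData[1] = vec4(linearDepth);                  // depth for the DoF blur
```

That way the alpha channel is free for actual transparency instead of doing double duty as depth storage.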
If you render to an FBO, you can attach a depth texture instead of a depth renderbuffer. Then you can simply bind it in the next pass and read the depth values back.
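In plain GL it's something like this, off the top of my head, so treat it as a sketch:

```cpp
// Sketch: FBO with a depth *texture* attached instead of a renderbuffer,
// so the depth values can be sampled in a later pass.
int w = 1024, h = 768;
GLuint fbo, depthTex;

glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, w, h, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);
// ... attach your color texture to GL_COLOR_ATTACHMENT0 and draw the scene ...

// Next pass: bind the depth texture like any other and sample it in the shader.
glBindTexture(GL_TEXTURE_2D, depthTex);
```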
Anyways, the DoF effect won't work properly with transparent objects, because the depth buffer only stores one depth value per pixel. With transparent objects you would need multiple depth values per fragment, and you still wouldn't know which one to use afterwards. So generally, if you have to read depth back from a texture or anything like that, transparency is a bad idea, since you can only store one depth value per pixel.
I got to a workable solution, although it's not ideal.
It uses two rendering passes like the article suggests. Moka was right that it could potentially use the depth texture to the same effect, but for now this seemed simpler than changing the way the MSAA FBOs are working.
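Roughly, the two passes boil down to this (heavily simplified; the fbo/shader names are placeholders rather than my real code):

```cpp
// Pass 1: scene drawn opaque, with the shader packing "depth" into alpha,
// exactly like before -- this feeds the DoF blur.
scenePassFbo.begin();
ofClear(0, 0, 0, 0);
depthAlphaShader.begin();   // writes the depth value into gl_FragColor.a
drawScene();                // placeholder for the actual scene drawing
depthAlphaShader.end();
scenePassFbo.end();
// ... run the existing blur + composite on scenePassFbo as usual ...

// Pass 2: same scene again, but this time with real alpha and blending on.
alphaPassFbo.begin();
ofClear(0, 0, 0, 0);
ofEnableAlphaBlending();
drawScene();
ofDisableAlphaBlending();
alphaPassFbo.end();

// Final composite: the DoF result from pass 1 combined with the
// transparent scene from pass 2.
```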
Hi,
did you ever get a chance to update this example to OF 007?
I am trying to make it use ofFbo and ofShader but get a bunch of errors when running.
Thanks.
errors:
allocating FBO 0
OF: OF_LOG_ERROR: ofGetGlFormatAndType(): glInternalFormat not recognized returning glFormat as glInternalFormat
OF: OF_LOG_ERROR: FRAMEBUFFER_INCOMPLETE_ATTACHMENT
allocating FBO 1
OF: OF_LOG_ERROR: ofGetGlFormatAndType(): glInternalFormat not recognized returning glFormat as glInternalFormat
OF: OF_LOG_ERROR: FRAMEBUFFER_INCOMPLETE_ATTACHMENT
allocating FBO 2
OF: OF_LOG_ERROR: ofGetGlFormatAndType(): glInternalFormat not recognized returning glFormat as glInternalFormat
OF: OF_LOG_ERROR: FRAMEBUFFER_INCOMPLETE_ATTACHMENT
allocating FBO 3
OF: OF_LOG_ERROR: ofGetGlFormatAndType(): glInternalFormat not recognized returning glFormat as glInternalFormat
OF: OF_LOG_ERROR: FRAMEBUFFER_INCOMPLETE_ATTACHMENT
allocating FBO 4
OF: OF_LOG_ERROR: ofGetGlFormatAndType(): glInternalFormat not recognized returning glFormat as glInternalFormat
OF: OF_LOG_ERROR: FRAMEBUFFER_INCOMPLETE_ATTACHMENT