Hello there. I had a quick look around and I'm not sure this question has been asked. I'm writing an application in openFrameworks based on the CameraGrabber example and the VSM shadow mapping example found here: http://www.fabiensanglard.net/shadowmappingVSM/
So far it works quite well, but I'm stuck on the blur stage. Essentially, the lightmap is rendered to fill the entire offscreen buffer, and a blur shader is applied to that buffer to soften the shadows. However, there is also a scale factor applied, and this scale factor seems to make the texture very dark: the higher the factor, the darker it gets.
I played around some more. It turns out that glOrtho() doesn't play well with openFrameworks (not sure why), so I turned that off and attempted to render the polygon with the texture on it. Oddly enough, I didn't see very much. I then figured out that the world space was different thanks to calling glLoadIdentity(). Rather than drawing a polygon from 0,0 to the extents of the screen, I had to draw a polygon from -1 to 1 on the X and Y axes.
The point of the second bit is that when the texture was drawn from 0,0 to the screen extents, its brightness was fine, if a little too zoomed in. Once I got the coordinate mapping right, the texture came out too dark. The same thing happens when the blur coefficient is set to 1, i.e. when the blurred texture is the same size as the unblurred texture buffer.
This seems to suggest that resizing textures is causing the darkening somewhere, but I can't really figure out where. Will try to post code later.