glReadPixels causes a memory leak on ATI Cards

Hello,

I’ve got a serious problem with a glReadPixels call that causes a memory leak.

glReadPixels(0, 0, width, height, GL_RGB, GL_UNSIGNED_BYTE, gpuReadBackBuffer);

The gpuReadBackBuffer is allocated as: gpuReadBackBuffer = new unsigned char[width * height];

I’m trying to do offscreen rendering on every update() and grab the screen into an ofImage,
so I write the data into an image:

gpuReadBackImageGS.setFromPixels(gpuReadBackBuffer, width, height);  

, where gpuReadBackImageGS is an ofxCvColorImage.
So I make some OpenGL drawing calls and then grab the screen. My first attempt with an FBO ended with the memory problem at the glReadPixels call, so I switched to another solution using grabScreen(), but then realized that internally a glReadPixels call is made there too, so I’ve got the same problem.

Does anybody else have advice on what I should look into? The issue seems to be an ATI problem, as on NVIDIA cards there are no memory problems.

This is driving me nuts and I’m running out of ideas. I can’t imagine that offscreen rendering on ATI cards is this buggy.

edit: a friend of mine told me that he has the same problem with the newest Catalyst driver 9.12, and the memory leak disappeared when he switched back to 9.4. :roll:

Are you deleting the pixels after you are done using them? That might be a stupid question, so I am asking just in case :slight_smile:

Another thing: I am pretty sure that all the OpenGL calls should be in the draw() function, as far as I remember. Have you tried putting everything in draw()?

Hi Moka,

I initialize the buffer and the image once at application start and reuse them the whole time.
It seems to be a driver problem. Another FBO example from this site also causes a memory leak on some computers with an ATI card. I’ve made some tests with a larger buffer, but there is no difference.
Is it important to explicitly set the read buffer with glReadBuffer?
I think the only mistake I could make with glReadPixels is reading too much data into a too-small buffer, but I thought a buffer of [application.width * application.height * 3] + 1 must be large enough for an RGB image of [application.width * application.height] pixels when I read from the normal framebuffer.