Using OpenGL on a background thread to gracefully handle slow GLSL shaders

I’m writing a shader editor for mobile devices, but since mobile GPUs are generally slow, it needs some mechanism for gracefully handling complex shaders without slowing down the rest of the app. My idea was to run the OpenGL rendering on a separate thread, and if the framerate drops below ~5 fps, render the frames to bitmaps that get smoothly cross-faded instead of hard cutting to the next rendered frame.
Is this possible? I can’t find much on the web…


I guess it is possible. Onscreen rendering from a background thread is forbidden, so I’d try rendering everything to an FBO on a separate thread (ofThread is an option here) and then drawing that FBO on screen. I’ve never tried this kind of thing on a background thread, but I think it’ll work.
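To make the pattern concrete, here’s a minimal sketch of the handoff between a background render thread and the main thread. The class name `ThreadedRenderer` and the whole setup are mine, not from openFrameworks: the expensive FBO/shader pass is stood in for by writing into a plain pixel buffer, so only the double-buffered threading structure is shown. In an actual app the worker would be an ofThread rendering into an ofFbo on a shared context.

```cpp
#include <atomic>
#include <cassert>
#include <chrono>
#include <mutex>
#include <thread>
#include <vector>

// Double-buffered handoff: a worker thread "renders" into the back buffer
// while the main thread reads the front buffer. Swapping under a mutex means
// the main thread never sees a half-finished frame.
class ThreadedRenderer {
public:
    ThreadedRenderer(int w, int h)
        : front(w * h, 0), back(w * h, 0), running(true),
          worker(&ThreadedRenderer::renderLoop, this) {}

    ~ThreadedRenderer() {
        running = false;
        worker.join();
    }

    // Called on the main thread: copy out the latest completed frame.
    std::vector<unsigned char> latestFrame() {
        std::lock_guard<std::mutex> lock(swapMutex);
        return front;
    }

    int framesRendered() const { return frameCount.load(); }

private:
    void renderLoop() {
        while (running) {
            // Stand-in for the heavy shader pass into an FBO.
            for (auto& px : back)
                px = static_cast<unsigned char>(frameCount.load() & 0xFF);
            {
                std::lock_guard<std::mutex> lock(swapMutex);
                front.swap(back);  // publish the finished frame
            }
            frameCount++;
        }
    }

    std::vector<unsigned char> front, back;
    std::atomic<bool> running;
    std::atomic<int> frameCount{0};
    std::mutex swapMutex;
    std::thread worker;
};
```

The main thread would call `latestFrame()` (or, with real GL, just draw the front FBO) once per display frame, regardless of how slowly the worker is producing frames.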

Thanks for the info guys.
I’ve found this, which seems to be exactly what I’m looking for but it’s for iOS.
So I’m guessing it is possible. Has anyone done something like this in openframeworks? It seems that the important part is the serial dispatch queue.
The other problem is that dynamically compiling big shaders is too slow to do on the main thread. This seems to be a solution to that: set up a shared context, then compile the shader on a separate thread using that context.
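The “serial dispatch queue” idea from the iOS example translates to plain C++ as a single worker thread draining a FIFO of tasks. This is a minimal sketch of that pattern (the `SerialQueue` class is hypothetical, not an openFrameworks API); for shader compilation, the worker would make the shared GL context current once at startup, and each task would compile one shader and notify the main thread when done.

```cpp
#include <cassert>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A minimal serial dispatch queue: tasks run one at a time, in submission
// order, on a single worker thread.
class SerialQueue {
public:
    SerialQueue() : done(false), worker(&SerialQueue::run, this) {}

    ~SerialQueue() {
        {
            std::lock_guard<std::mutex> lock(m);
            done = true;
        }
        cv.notify_one();
        worker.join();  // remaining tasks drain before the thread exits
    }

    void dispatch(std::function<void()> task) {
        {
            std::lock_guard<std::mutex> lock(m);
            tasks.push(std::move(task));
        }
        cv.notify_one();
    }

private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [this] { return done || !tasks.empty(); });
                if (tasks.empty()) return;  // done and fully drained
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();  // e.g. compile one shader on the shared context
        }
    }

    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> tasks;
    bool done;
    std::thread worker;
};
```

Because the queue is strictly serial, only one shader compiles at a time and the shared context is never touched from two threads at once.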


Ok guys, you’re right. It seems I’m attempting the impossible. Apparently the shader pass gets executed all in one go by the GPU, so if it’s a huge shader, the GPU is tied up until it finishes. The only thing I can think of is to split the render area into small tiles, so that there’s a way to slot in other GPU calls between them. This could work.

Cracked it! I’m writing this in case someone else needs to do the same thing.

So if the GPU is handed a super heavy shader to compute, it has to do it all in one chunk, which blocks everything else from using the GPU while it computes, thus slowing down the whole computer.

The solution is to split the shader rendering into small tiles (using viewports and changing the camera position), and render each one in a separate call into the corresponding place in a frame buffer. Let this run as fast as possible. When the frame buffer is full, display it to the screen and wait until the next appropriate time to start the process again, thus maintaining a stable framerate. This splits the shader rendering into a bunch of smaller tasks, allowing other things to squeeze into the gaps for a bit of GPU time. That’s my theory anyway - I still have basically no idea how OpenGL works.
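The tile bookkeeping described above can be sketched as follows. The `Tile` struct and `makeTiles` helper are illustrative names of my own; each rectangle would become one draw call, set up via `glViewport`/`glScissor` (or the ofFbo equivalent), into the accumulation frame buffer.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// One rectangle per render call. Splitting a big shader pass into tiles like
// this leaves gaps between draw calls where other GPU work can run.
struct Tile { int x, y, w, h; };

// Cover a w×h framebuffer with tiles at most tileSize on a side; edge tiles
// shrink so the whole buffer is covered exactly once with no overlap.
std::vector<Tile> makeTiles(int w, int h, int tileSize) {
    std::vector<Tile> tiles;
    for (int y = 0; y < h; y += tileSize) {
        for (int x = 0; x < w; x += tileSize) {
            tiles.push_back({x, y,
                             std::min(tileSize, w - x),
                             std::min(tileSize, h - y)});
        }
    }
    return tiles;
}
```

Smaller tiles mean shorter individual GPU calls (more responsive, more overhead); in practice you’d tune the tile size until the rest of the app stays smooth.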

I hope this helps anyone working with brutal shaders who doesn’t want them to ruin everything.