Fastest method for direct pixel modification


#1

Hey All,

I have an animation algorithm which works by taking all the pixel values on the screen and slightly modifying their colors. Unsurprisingly, this really chugs on big screens, since it performs multiple operations for every individual pixel. Currently I implement it by reading the screen into an ofImage, looking at each pixel, and writing new pixel values based on the old ones; I then draw this new ofImage. For full-speed playback, I preprocess every frame into an array of ofImages and draw one per frame.
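For reference, the per-frame work is roughly along these lines (simplified; the hue shift is just a stand-in for my actual color operation):

    // inside draw(): read the screen back, tweak every pixel, draw the result
    ofImage frame;
    frame.grabScreen(0, 0, ofGetWidth(), ofGetHeight());

    ofPixels & pix = frame.getPixels();
    for (size_t y = 0; y < pix.getHeight(); y++) {
        for (size_t x = 0; x < pix.getWidth(); x++) {
            ofColor c = pix.getColor(x, y);
            c.setHue(fmodf(c.getHue() + 1.0f, 255.0f)); // placeholder per-pixel tweak
            pix.setColor(x, y, c);
        }
    }
    frame.update();  // upload the modified pixels back to the GPU
    frame.draw(0, 0);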

I’m wondering whether this is the fastest approach or if there are better methods. Does OpenGL have anything which makes it easier to read pixels without the intermediate object? When an operation doesn’t require any other drawing, I simply reuse the previous ofImage rather than re-reading the screen, which does help a little.

I noticed this exact question hasn’t been asked here for a few years, and I just wanted to check that I’m thinking about the problem correctly, since I’m planning to do a lot with this effect.

Thanks in advance for any help.


#2

My friend… Welcome to the beautiful world of Shaders! :smile:

The ofBook comes with an introductory chapter on this. Also, the gl and shaders example folders that come with OF let you explore a bit more.
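Just to give you a flavour: the per-pixel loop you describe becomes a few lines of fragment shader that the GPU runs for every pixel in parallel. Very roughly, something like this (an untested sketch assuming the default GL2 renderer, as in the OF shader examples; shader is an ofShader, img is whatever image or texture you want to process, and you’d normally keep the GLSL in a .frag file):

    // setup(): build a tiny fragment shader inline
    std::string frag = R"(
        #version 120
        #extension GL_ARB_texture_rectangle : enable
        uniform sampler2DRect tex0;
        void main(){
            vec4 c = texture2DRect(tex0, gl_TexCoord[0].xy);
            gl_FragColor = vec4(c.rgb * 0.98 + 0.01, c.a); // slightly shift every pixel's color
        }
    )";
    shader.setupShaderFromSource(GL_FRAGMENT_SHADER, frag);
    shader.linkProgram();

    // draw(): everything drawn between begin()/end() goes through the shader
    shader.begin();
    img.draw(0, 0);
    shader.end();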

Have a go, play around, and if you have any questions you know where to find us.


#3

Hey, this was super helpful, and I’ve been getting the hang of using shaders in OF. The final bit I’m not fully understanding is how to keep working with the pixels I’ve modified with a shader. Right now, by modifying the 4th shader example that comes with OF, I have my original texture being randomly deformed in the ways I want. What I’m having more trouble with is feeding that modified, deformed texture back in as the basis for the shader’s transformation on the next frame.

    shader.begin();
    shader.setUniformTexture("text", *texP, 0);

    shader.setUniform1f("mouseX", ofRandom(0, 5));
    shader.setUniform1f("time", ofGetElapsedTimef());
    ofPushMatrix();
    ofTranslate(ofGetWidth()/2, ofGetHeight()/2);
    plane.draw();
    ofPopMatrix();

    shader.end();
    plane.resizeToTexture(*texP);

This is the code I’ve modified inside the draw function; “plane”, “shader”, and “text” are all global variables. I’m guessing the problem is that resizeToTexture is not doing what I hope it’s doing (taking the values changed by the shader and writing them back to the texture for the next frame). Any suggestions for best practices here? Should I do this somehow within the GLSL?

Thanks so much for the help and friendly response.


#4

Glad to see you have been messing around! :slight_smile:

What resizeToTexture actually does is set a new size (width and height) for the plane based on the size of the texture you pass in; it doesn’t copy any pixel data back into the texture.

What I think you want to do is something a bit more advanced, called a ping-pong buffer. This technique is used when a shader needs its own result as a source for its next iteration: you render into one buffer while reading from the other, then swap them each frame, because the GPU can’t read from and write to the same texture in a single pass.
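The bare bones of it is two FBOs that trade places every frame. Just as a sketch to show the swap (reusing your “text” uniform and plane from above; not drop-in code):

    // ofApp.h (sketch): two FBOs that swap roles every frame
    ofFbo fboA, fboB;
    ofFbo *src = &fboA;   // the shader reads from this one (last frame's result)
    ofFbo *dst = &fboB;   // the shader writes into this one this frame

    // setup()
    fboA.allocate(ofGetWidth(), ofGetHeight(), GL_RGBA);
    fboB.allocate(ofGetWidth(), ofGetHeight(), GL_RGBA);

    // draw()
    dst->begin();
    ofClear(0, 0, 0, 255);
    shader.begin();
    shader.setUniformTexture("text", src->getTexture(), 0); // feed last frame's result back in
    shader.setUniform1f("time", ofGetElapsedTimef());
    ofPushMatrix();
    ofTranslate(ofGetWidth()/2, ofGetHeight()/2);
    plane.draw();
    ofPopMatrix();
    shader.end();
    dst->end();

    dst->draw(0, 0);       // show this frame's result on screen
    std::swap(src, dst);   // next frame reads what was just written

You’ll also want to draw your starting texture into one of the FBOs once, in setup, so the feedback loop has something to begin with.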

Once again I’m going to point you to one of OF’s examples, in this case gl/gpuParticleSystemExample. I do so because it’s pretty well commented and, as you’ll see, a bit more extensive and complex.