Just wanted to ask here before spending hours and hours investigating!
I’d like to go straight to the best approach.
Let’s imagine I draw a video texture, and then draw rectangles in front of it.
I want to take the resulting screen and do some pixel modification / analysis (for example change brightness, or find blobs…) using the getPixels() method.
How can I “mix” the different elements I have so I can analyse them?
Is loadScreenData() the best method?
I have some doubts, as I feel I would be starting the analysis after the main draw() method and not in update().
Any clue would be welcome !
getPixels() is slow on most video cards. this is one problem with OpenGL - it’s designed more for 3D worlds than for 2D compositing work.
if you can get a hold of the pixel data before you draw it to screen, this is best.
as for compositing techniques, you can try messing around with the glBlendFunc() function - it sets the kind of blending method to use. it’s deeply counterintuitive to figure out, so it’s best to just experiment.
for reference, normal blending is
glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );
additive blending (real pretty if you add red and green and blue things together) is
glBlendFunc( GL_SRC_ALPHA, GL_ONE );
Sorry for the late reply!
Well, thanks for your answer, but I think I did not explain myself correctly.
What I need to do is get an unsigned char * of the pixel values of the different elements.
For example, in this simple case:
How can I get all this “drawn” pixel data as an unsigned char *?
Is the best way to use another texture and loadScreenData()?
I don’t see another solution, and I feel unsure about “drawing” elements while I’m still updating the application - because then I will need to use this data heavily.
Maybe it’s not a common way to do that kind of thing, so tell me if I’m totally wrong!
Thanks a lot,
What you can do is use the OpenGL function called glReadPixels. That returns the pixels of the window…
to get pixels from the screen, you can use ofImage’s grabScreen() and then its getPixels() to get those pixels. this is easier than using opengl calls directly, since opengl considers 0,0 the bottom left corner, etc.
using loadScreenData from a texture is not going to get you pixels, since that’s taking stuff on the graphics card and keeping it on the graphics card. it’s fast, but doesn’t get the data back into RAM (computer memory).
Yes, that’s what I ended up doing - it works well!
I just wanted to know if there was another method (like merging textures) so as not to update - draw - update - draw again.
as far as I know, the only alternative is to draw into frame buffer objects and then use openGL shaders to access and manipulate the texture data.
As I understand it, in this case the texture data remains on the graphics card, and there is no going back and forth between the CPU and the graphics card, which gives you extra performance (at the cost of a more complicated development pipeline).
Yup, david is exactly right. If you search for FBO you will find a number of posts (and examples). If you wanted to go down this route, your draw() function would look like this (quite simplified, of course):
- activate FBO (anything drawn after this will automatically get drawn to the FBO, not the screen)
- draw your video (this goes straight to the FBO, not to the screen)
- draw your triangles (again, goes straight to the FBO)
- draw anything else (again, to FBO)
- deactivate FBO
- activate a fragment shader (more on this later)
- draw the FBO (now the flattened contents of the FBO gets drawn to the screen with the active shader processing it as it gets drawn)
- deactivate shader
All the post processing you want to do is done in the fragment shader, written in GLSL.
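For example, a minimal brightness-adjusting fragment shader for that last draw step could look something like this (a hedged GLSL sketch - the uniform names are made up, and depending on your texture setup you may need sampler2DRect instead of sampler2D):

```glsl
uniform sampler2D tex;      // the FBO's flattened contents
uniform float brightness;   // 1.0 = unchanged, 1.5 = brighter

void main() {
    // this runs once per pixel as the FBO is drawn to the screen
    vec4 color = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = vec4(color.rgb * brightness, color.a);
}
```

You'd set the uniforms from your OF app before drawing the FBO.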
As David mentioned, this method will give much better performance because all the data stays on the graphics card, and the post processing is done on the GPU. However, it is a bit more complicated. If all you want to do is adjust brightness, blur or other simple effects, then it’s not *that* complicated. But if you want to do blob tracking or other complex multi-pass algorithms, then you are entering gpgpu territory (http://www.gpgpu.org) and a world of pain. For the latter cases I would advise either going with CUDA or waiting for OpenCL before trying that stuff on the GPU.
Each day I’m discovering new words !
This fbo idea looks really nice.
I looked for the ofFBOTexture “addon” in the forum, which seems really easy to use.
But I was wondering why I can’t access the ofTexture pixel data as easily as I could with an ofImage and getPixels().
Is that the role of the shaders? I still have to investigate them more - I don’t get the concept yet.
Oh I’m sorry, I should post into the beginners section. Doh !
Yea the fbo / shader route won’t give you an unsigned char * of pixels. Your openframeworks application (running on the CPU) won’t have access to the pixels at all. Instead, a separate tiny little program (i.e. a shader, running on the GPU) has access to the pixels. And not as an unsigned char *. Instead, the way shaders work is, you write a ‘kernel’, a little program that will automatically get called for every pixel - you can think of it as the inside of the for loop you would write if you were looping over all of the pixels.
the fbo / shader approach is conceptually quite different to traditional CPU programming. So i’d recommend going with Zach’s suggestion first - if the performance is good enough for what you need, there’s no point complicating your life with fbo’s and shaders.
Thanks a lot for all these helpful details…
Can’t wait to try all these new things!
Is there something like a “beginner’s guide” to what memo was talking about?
i tried getPixels from ofxFBOTexture and failed.
i found this topic here and now I understand the problem.
but is there a way to use OpenCV methods on FBOs? using grabScreen seems a little bit dirty because I don’t want to have the interim results on the screen.
okay, now i use grabScreen() in update() before drawing anything else and it is not as bad as i thought.