right now i’m trying to mask an image with another image… specifically, i want to see through a donut shape to an image behind it, as if we’re looking through the letter ‘o’ and seeing another image within that shape. i’m not quite sure if i should be trying to figure this out with a png file or ofCircle…?
Hmm, it can be quite easily done if you are using the OpenCV addon.
Otherwise I guess you could do a big loop through all the pixels of the image, only copying pixels from the source image into the destination if that pixel in the mask has a value above 0. That’s sort-of-ish what I did with out-of-bounds.
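something like this, roughly (untested; src, mask, dst, w, and h are placeholder names, with src/dst as same-size RGB ofImages and mask as a grayscale one):
// copy a pixel from the source only where the mask is lit
unsigned char * srcPix  = src.getPixels();
unsigned char * maskPix = mask.getPixels();
unsigned char * dstPix  = dst.getPixels();
for (int i = 0; i < w * h; i++){
    if (maskPix[i] > 0){   // one byte per pixel in the grayscale mask
        dstPix[i * 3 + 0] = srcPix[i * 3 + 0];
        dstPix[i * 3 + 1] = srcPix[i * 3 + 1];
        dstPix[i * 3 + 2] = srcPix[i * 3 + 2];
    }
}
dst.update();   // push the edited pixels back into dst’s texture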
A very easy way of doing this would be to use a doughnut image with an alpha mask (.tga files allow alpha channels) and then turn alpha blending on…
Can’t remember if there’s an example of this in the OF code, but I think there is.
oh, thanks! the image with the alpha channel makes a lot of sense, but i am masking a live video feed, so i will probably loop through pixels and only copy the ones i want to display…
grimus is right.
It’s hard to know what you need without knowing more about what you want to do.
If you just want to display a video but only within a masked area, then draw the video first and then draw your circle image over the top (filled around the outside with a transparent space in the centre where the mask will be).
This will still work with a live video; a rough sketch is below. Let us know if you need help.
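in code, that would be roughly this (untested; videoGrabber and donutImg are placeholder names, with donutImg a PNG/TGA that is opaque around the outside and transparent in the centre):
ofSetColor(255, 255, 255);
videoGrabber.draw(0, 0);     // the live video underneath
ofEnableAlphaBlending();     // respect donutImg’s alpha channel
donutImg.draw(0, 0);         // opaque ring with a transparent hole
ofDisableAlphaBlending();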
hi -
I am totally rushing, so this will be very quick.
I threw an example here:
openFrameworks.cc/files/examples/imageMixerCode.zip
which mixes an rgb image and a grayscale image into an RGBA texture.
it’s not so bad to do it this way, but we will think about adding some mask functionality to ofImage.
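in case the link moves, the basic idea is roughly this (untested; colorImg / grayImg / rgbaTex / w / h are placeholder names: same-size RGB and grayscale images, and an ofTexture allocated as GL_RGBA):
// pack the rgb image into the color channels and the grayscale
// image into the alpha channel of one RGBA pixel buffer
unsigned char * rgb  = colorImg.getPixels();
unsigned char * gray = grayImg.getPixels();
unsigned char * rgba = new unsigned char[w * h * 4];
for (int i = 0; i < w * h; i++){
    rgba[i * 4 + 0] = rgb[i * 3 + 0];
    rgba[i * 4 + 1] = rgb[i * 3 + 1];
    rgba[i * 4 + 2] = rgb[i * 3 + 2];
    rgba[i * 4 + 3] = gray[i];       // grayscale value becomes opacity
}
rgbaTex.loadData(rgba, w, h, GL_RGBA);
delete [] rgba;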
there is also some advanced stuff I’ll try to post later today about using the alpha channel in OpenGL to control the per-pixel opacity of whatever is drawn on top. it allows you to do elaborate, OpenGL (i.e., vector) based masks, and to mask out multiple things. it only works on higher-end graphics cards, but it’s a pretty cool trick and allows for *a lot* of flexibility in terms of drawing…
hope that helps !!
zach
thanks zach, that is super helpful! unfortunately, what i’m doing is finding the x & y centroid and attaching a ‘donut’ image on top of a duplicated fullscreen live video image, so this method won’t work, since the png is always smaller than the video. i’ll look into the advanced per-pixel opacity rendering, as i think that’s the only way to achieve the effect i’m after.
well then maybe this advanced thing will work
I think (but haven’t tested) that this might be slow on some machines…
glDisable(GL_BLEND);                 // no blending while writing the mask
glColorMask(0, 0, 0, 1);             // write into the alpha channel only
glColor4f(1, 1, 1, 1.0f);            // alpha = 1 where the mask should show
ofCircle(50, 50, 30);                // the mask shape (could be an image, etc.)
glColorMask(1, 1, 1, 0);             // back to writing rgb; leave alpha alone
glEnable(GL_BLEND);
glBlendFunc(GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA);  // blend against the alpha we just drew
ofSetColor(0xff0000);
for (int i = 0; i < 400; i++){
    ofLine(ofRandom(0,300), ofRandom(0,300), ofRandom(0,300), ofRandom(0,300));
}
it draws a circle into the alpha channel (it could be an image too, or have variable brightness…). because blending is disabled when you draw that first thing (the circle, etc.), this lends itself really well to images serving as masks, and those images can move wherever!
the output: (attached screenshot: the random lines show up only inside the circle)
hope this helps -
take care!
zach
what about fading between two images that are being used as masks?
brush1 and brush2 are .png files with alpha channels. the variable fadeAmt goes from 0 to 255. instead of fading between two masked-out solid colors, the images lose their intensity (the cross-faded mask goes to a grey color instead of staying black and white).
thanks,
jeremy
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// draw first image
ofSetColor(190, 10, 0, fadeAmt);
brush1.draw(250, 200);
// draw second image
ofSetColor(190, 10, 0, 255-fadeAmt);
brush2.draw(250, 200);
Hi,
Can this be achieved in DirectX too?
about directx - I have no idea – we are opengl people.
there is info here: http://en.wikipedia.org/wiki/Comparison-…-d-Direct3D
but because we run on linux and mac, we use opengl
about the mixing: I think you’d be quicker to computationally mix the images in an array and upload the result to a texture. You asked me this at eyebeam, and I have to say that I am not aware of a way to crossfade two alpha images cleanly using blend modes.
the problem is that you are blending the first image’s pixels (as you decrease its opacity) with the background, and the second image’s pixels with (first + background), so it’s impossible to blend the two without including the background pixels in some way…
I’m looking for a solution using some more advanced blending techniques like (http://www.opengl.org/registry/specs/EX-…-parate.txt), but I don’t know one offhand, so I suggest doing some pixel hacking: mixing the pixels by hand and uploading to a texture.
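something like this, maybe (untested; brush1 / brush2 / mixedTex / w / h are placeholders, with the brushes as same-size RGBA images and fadeAmt in 0..255 as above):
// crossfade the two masks on the cpu, then upload the result
unsigned char * a = brush1.getPixels();
unsigned char * b = brush2.getPixels();
unsigned char * mixed = new unsigned char[w * h * 4];
float mix = fadeAmt / 255.0f;
for (int i = 0; i < w * h * 4; i++){
    mixed[i] = (unsigned char)(a[i] * (1.0f - mix) + b[i] * mix);
}
mixedTex.loadData(mixed, w, h, GL_RGBA);   // ofTexture allocated as GL_RGBA
delete [] mixed;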
take care!!
zach
How do you guys find the speed when you try to read alpha pixel values back from the screen in OpenGL? After all, we are using the same 3D graphics card; only the language is different.
I know that in DirectX it is usually quite slow if we try locking the backbuffer, and a lot of people don’t recommend doing it.
I’m not sure I understand your question, but readbacks are always expensive, because the connection to the card is a bottleneck compared to moving things around on the card. So uploading data from RAM into textures, or reading pixels back into RAM, will always be slower than what you can do with things already on the card… most of what we are talking about here happens on the card itself, though.
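for reference, the readback in question is a call like this (w, h, and the pixel buffer are placeholders; the call stalls the pipeline while the frame comes back over the bus):
// read the framebuffer, including alpha, back into main memory
unsigned char * pixels = new unsigned char[w * h * 4];
glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);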
thanks!!
zach
hey zach,
i was able to get my code working using the accumulation buffer. first i had to go into the source code for ofAppRunner.cpp and enable GLUT_ACCUM in glutInitDisplayMode(). it’s more expensive to run, but it does the trick.
jeremy
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// clear screen
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
ofSetColor(190, 10, 0);
// draw first image
brush1.draw(250, 200);
// accumulate this frame scaled by (1 - fade); note that fadeAmt here
// is assumed to be normalized to 0.0 - 1.0, not 0 - 255 as before
glAccum(GL_ACCUM, 1.0 - fadeAmt);
ofSetColor(190, 10, 0);
// clear screen again
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// draw second image
brush2.draw(250, 200);
// add this frame scaled by fade to the accumulation buffer
glAccum(GL_ACCUM, fadeAmt);
// copy the accumulated result back to the screen
glAccum(GL_RETURN, 1.0);
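for reference, the ofAppRunner.cpp change was just adding GLUT_ACCUM to the display-mode flags, roughly like this (the exact flag list depends on your OF version):
// request an accumulation buffer from glut alongside the usual buffers
glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH | GLUT_ACCUM);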
mantissa wrote:
what about fading between two images that are being used as masks?
brush1 and brush2 are .png files with alpha channels. the variable fadeAmt goes from 0 to 255. instead of fading between two masked-out solid colors, the images lose their intensity (the cross-faded mask goes to a grey color instead of staying black and white).
thanks,
jeremy
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
// draw first image
ofSetColor(190, 10, 0, fadeAmt);
brush1.draw(250, 200);
// draw second image
ofSetColor(190, 10, 0, 255-fadeAmt);
brush2.draw(250, 200);
I tried this piece of code, but apparently the alpha is not working. I don’t see a circle mask as in your diagram; I see a square instead. Any ideas what might be missing?
you’ll need to enable blending first, either with ofEnableAlphaBlending() or glEnable(GL_BLEND)
jeremy
Sorry guys, I pressed the “Quote” button and Mantissa’s code accidentally got in.
It was meant to be:
Zach wrote:
well then maybe this advanced thing will work
I think (but haven’t tested) that this might be slow on some machines…
Code:
glDisable(GL_BLEND);
glColorMask(0, 0, 0, 1);
glColor4f(1,1,1,1.0f);
ofCircle (50,50,30);
glColorMask(1,1,1,0);
glEnable(GL_BLEND);
glBlendFunc( GL_DST_ALPHA, GL_ONE_MINUS_DST_ALPHA );
ofSetColor(0xff0000);
for (int i = 0; i < 400; i++){
ofLine(ofRandom(0,300), ofRandom(0,300), ofRandom(0,300), ofRandom(0,300));
}
I tried this piece of code, but apparently the alpha is not working. I don’t see a circle mask as in your diagram; I see a square instead. Any ideas what might be missing?
about the non-working code: it might be graphics-card dependent… not entirely sure. anyone else want to check it? thanks! -z
I managed to get the previous “imageMixerCode” example working, though.
Of course, I see that “imageMixerCode” is manual pixel manipulation.
Hi,
Reading this thread, my first reaction is to try to use an image library, like CImg, to do the image manipulations and then copy the image into an OpenGL texture for display. From what I can tell, OF does pretty much the same thing to copy from a FreeImage image to an OpenGL texture behind the scenes when you call ofImage::draw.
CImg (http://cimg.sourceforge.net/) looks pretty good to me, but I’m not sure how it performs, i.e. whether it would be fast enough for real-time image manipulation. I did some similar stuff using the Python Imaging Library, but it wasn’t even close to real-time: the program rendered each frame one by one and stored the frames, and I then used ffmpeg to wrap them into a video.
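a rough, untested sketch of what I mean (CImg method names vary a bit between versions; "mask.png" and tex are placeholders, with tex being an ofTexture):
#include "CImg.h"
using namespace cimg_library;
// CImg stores each channel as a separate plane (RRR...GGG...BBB),
// so the pixels have to be interleaved before uploading to a texture
CImg<unsigned char> img("mask.png");
int w = img.width();
int h = img.height();
unsigned char * interleaved = new unsigned char[w * h * 3];
for (int y = 0; y < h; y++){
    for (int x = 0; x < w; x++){
        interleaved[(y * w + x) * 3 + 0] = img(x, y, 0, 0);  // red plane
        interleaved[(y * w + x) * 3 + 1] = img(x, y, 0, 1);  // green plane
        interleaved[(y * w + x) * 3 + 2] = img(x, y, 0, 2);  // blue plane
    }
}
tex.allocate(w, h, GL_RGB);
tex.loadData(interleaved, w, h, GL_RGB);
delete [] interleaved;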
-Brian