Using grayscale video as a mask...

I have a grayscale video: a silhouette of a dark figure in the foreground against a white background. I would like to save each frame of this video as an alpha mask (i.e. record the brightness value of each pixel and use the resulting texture to mask another image).

I know I can store this information in an RGBA texture, but this seems like a waste of space: instead of needing RGBA values for each pixel, I really only need an A value.

Is there some way to accomplish this?

Thanks!

Jonathan

there is a type “GL_LUMINANCE_ALPHA” which is basically 2 bytes per pixel (one grayscale, one alpha), which could be helpful… it saves a lot of space in graphics card memory. We use this type, for example, in ofTrueTypeFont, since we actually need different values for RGB than for A (i.e., pixels of a font’s char are transparent or visible in alpha, but all the same RGB value)…
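For illustration, here is a minimal sketch (plain C++, not oF API — the function name is mine) of how a grayscale buffer can be packed into the interleaved two-byte-per-pixel layout that GL_LUMINANCE_ALPHA expects:

```cpp
#include <cstdint>
#include <vector>

// Pack an 8-bit grayscale buffer into interleaved luminance/alpha pairs,
// the 2-bytes-per-pixel layout a GL_LUMINANCE_ALPHA texture expects.
// Copying the brightness into both channels lets the same texture be
// drawn as a grayscale image and used as an alpha mask.
std::vector<uint8_t> packLuminanceAlpha(const std::vector<uint8_t>& gray) {
    std::vector<uint8_t> la;
    la.reserve(gray.size() * 2);
    for (uint8_t v : gray) {
        la.push_back(v); // luminance
        la.push_back(v); // alpha, taken from the pixel's brightness
    }
    return la;
}
```

The result could then be uploaded with something like tex.loadData(la.data(), w, h, GL_LUMINANCE_ALPHA), at half the memory of an RGBA upload.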

the routine at the end of this thread might also help -
http://forum.openframeworks.cc/t/masking-an-image-with-another-image/339/0
I think it works on high-end cards, and allows you to draw as an alpha mask…

hth !

  • zach

Thanks, Zach!

GL_LUMINANCE_ALPHA did the trick.

Also, I found this nice tutorial that helped explain how this works:
http://nehe.gamedev.net/data/lessons/le-…-?lesson=09

And here’s the relevant code, in case anyone else wants to do this:

glEnable(GL_TEXTURE_2D); // Enable Texture Mapping
glShadeModel(GL_SMOOTH); // Enable Smooth Shading
glClearColor(0.0f, 0.0f, 0.0f, 0.5f); // Black Background
glClearDepth(1.0f); // Depth Buffer Setup
glBlendFunc(GL_SRC_ALPHA, GL_ONE); // Set The Blending Function For Translucency
glEnable(GL_BLEND); // Enable Blending
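For intuition about the blend function above: glBlendFunc(GL_SRC_ALPHA, GL_ONE) makes OpenGL compute, per channel, result = src × srcAlpha + dst (additive blending). A standalone sketch of that arithmetic, with an illustrative function name:

```cpp
#include <algorithm>

// Per-channel result of glBlendFunc(GL_SRC_ALPHA, GL_ONE): the incoming
// color is scaled by its alpha, added onto the destination, and clamped
// to the usual 0-255 range (colors as 0-255 ints, alpha as 0.0-1.0).
int blendSrcAlphaOne(int src, float srcAlpha, int dst) {
    int out = static_cast<int>(src * srcAlpha) + dst;
    return std::min(out, 255);
}
```

So where the mask's alpha is 0, nothing of the source is added and the destination shows through unchanged.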

Take care!

Jonathan

hi,

I’ve got a similar problem and I can’t seem to get it right. I have a generated ofTexture, black and white, that I want to use as a mask for an ofImage: anything over the white pixels goes through, anything over the black ones does not. I’ve tried a bunch of things, but I can’t seem to get it right…

The closest I got was with the following code, but it does the opposite: the black pixels are the ones letting the image through.

  
  
glDisable(GL_BLEND);  
myTexture.draw(0, 0, ofGetWidth(), ofGetHeight());  
glEnable(GL_BLEND);  
glBlendFunc(GL_SRC_ALPHA, GL_ONE);  
myImage.draw(0, 0, ofGetWidth(), ofGetHeight());  
  

any ideas?

thanks!

Hi everyone,

I received a private email from someone who was struggling with this problem, and thought I’d post my reply in the forum so others can use it.

Anyway, this alpha blending stuff can be a real pain to get right! I remember I did a lot of tweaking and experimenting before I found the right combination.

Here are some links that may or may not be useful:

This page explains the blending modes of OpenGL, and how they work:
http://pyopengl.sourceforge.net/documen-…-nc.3G.html

If you’re trying to do alpha masking over video, I seem to remember that the OF VideoPlayer class didn’t support alpha pixels, only RGB pixels. So I hacked the ofVideoPlayer class to make an alphaVideoPlayer class, which I posted here:
http://number27.org/download/OF/AlphaTextureVideo.zip

I also included a MyTexture class in there, which I seem to remember needing to use instead of the ofTexture class. Sorry I can’t give more specific guidance here, but it’s been about a year since I looked at this code, and I have a million things going on right now, so don’t really have time to get back into it.

Anyway, hope that’s somewhat helpful! Good luck!

Jonathan

It’s not incredibly intuitive, but I did find this app helpful once for figuring out a blend mode that I needed:

http://www.embege.com/blendinspect/

Source GL_ONE with destination GL_ZERO is the same as turning off blending.
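That equivalence follows from the general blend equation, result = src × srcFactor + dst × dstFactor: with GL_ONE/GL_ZERO the destination term drops out and the source passes through unchanged. A tiny sketch of the equation itself (per channel, 0.0–1.0 values, illustrative name):

```cpp
// General form of the OpenGL blend equation for one color channel:
// result = src * srcFactor + dst * dstFactor (all values 0.0-1.0).
// With srcFactor = 1 (GL_ONE) and dstFactor = 0 (GL_ZERO) the result
// is just src, i.e. the same as blending disabled.
float blend(float src, float srcFactor, float dst, float dstFactor) {
    return src * srcFactor + dst * dstFactor;
}
```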

/A

this was my solution for masking a video:

in setup:

cam_mask.loadImage("images/interface/cam_mask.png");
cam_mask.setImageType(OF_IMAGE_GRAYSCALE);
alphaPixels = cam_mask.getPixels();
pixels = new unsigned char[640 * 480 * 4]; // RGBA buffer filled in update()
kamerabild.allocate(cam_mask.width, cam_mask.height, GL_RGBA);

in update

colorPixels = vidGrabber.getPixels();
for (int i = 0; i < 640; i++){
    for (int j = 0; j < 480; j++){
        int pos = (j * 640 + i);
        // copy the camera's green channel into R, G and B (grayscale)...
        pixels[pos*4  ] = colorPixels[pos * 3 + 1];
        pixels[pos*4+1] = colorPixels[pos * 3 + 1];
        pixels[pos*4+2] = colorPixels[pos * 3 + 1];
        // ...and take the alpha from the mask image
        pixels[pos*4+3] = alphaPixels[pos];
    }
}

kamerabild.loadData(pixels, 640, 480, GL_RGBA);

This was exactly the problem I was facing at the moment. I needed to mask the image of a webcam, so its shape was no longer rectangular, but an irregular shape defined by an image.

I have found a nice, real-time solution to it by using multi-texturing. Since this mechanism was defined in OpenGL 1.3, I am pretty sure it should work on most if not all graphics hardware.

I will post the draw function of my sample app here. It contains the most important code. The full source can be downloaded here. An example using a webcam is available here.

  
  
void testApp::draw(){
    background.draw(0, 0);

    // make sure alpha blending is enabled
    ofEnableAlphaBlending();

    // set up multi-texturing
    glActiveTexture(GL_TEXTURE0);
    masked.getTextureReference().bind();
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
    glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_REPLACE);

    glActiveTexture(GL_TEXTURE1);
    mask.getTextureReference().bind();
    glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);
    glTexEnvf(GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_BLEND);

    // render masked and mask images as one
    glBegin(GL_QUADS);
    glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 480.0f);
    glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 480.0f);
    glVertex3f(0, 480, 0);
    glMultiTexCoord2f(GL_TEXTURE0, 0.0f, 0.0f);
    glMultiTexCoord2f(GL_TEXTURE1, 0.0f, 0.0f);
    glVertex3f(0, 0, 0);
    glMultiTexCoord2f(GL_TEXTURE0, 640.0f, 0.0f);
    glMultiTexCoord2f(GL_TEXTURE1, 640.0f, 0.0f);
    glVertex3f(640, 0, 0);
    glMultiTexCoord2f(GL_TEXTURE0, 640.0f, 480.0f);
    glMultiTexCoord2f(GL_TEXTURE1, 640.0f, 480.0f);
    glVertex3f(640, 480, 0);
    glEnd();

    // properly unbind the textures
    mask.getTextureReference().unbind();
    glActiveTexture(GL_TEXTURE0);
    masked.getTextureReference().unbind();

    // disable alpha blending again
    ofDisableAlphaBlending();
}
  

When applying this to your own projects, make sure to use the correct texture coordinates.

Paul

PS: it is worth noting that the mask image is all white (rgb = 255, 255, 255) with an 8-bit alpha channel. Using colors other than white will filter the colors of the masked image.
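To see why: texture combining multiplies the mask's color into the masked image component-wise (roughly as in GL_MODULATE), so a white mask leaves colors untouched while anything darker tints them. A rough sketch of that math with 0–255 channel values (the function name is illustrative):

```cpp
// Component-wise modulation of an image channel by a mask channel,
// as happens when texture colors are multiplied together:
// out = image * mask / 255. A white mask (255) leaves the channel
// unchanged; a darker mask scales it down, filtering the color.
int modulate(int imageChannel, int maskChannel) {
    return imageChannel * maskChannel / 255;
}
```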

masking_webcam.zip

In order to create an alpha mask with video, I am trying to put two textures in one quad (or two quads) and mask one texture with the alpha channel of the other. I am having a problem with the texture coordinates. The mask image is 1024x768 pixels, and the video size changes each instance.

It works perfectly fine if the video and mask are the same size, but if I want to make the video a different size than the mask, I have trouble placing it correctly on the screen. The coordinates come out very weird.

Any ideas?

The code is the same as stated before here:

  
  
glActiveTexture(GL_TEXTURE0);  
video.getTextureReference().bind();  
glTexEnvi (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);  
glTexEnvi (GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_REPLACE);  
  
glActiveTexture(GL_TEXTURE1);  
maskImage->getTextureReference().bind();  
glTexEnvf (GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE_EXT);  
glTexEnvf (GL_TEXTURE_ENV, GL_COMBINE_RGB_EXT, GL_BLEND);  
  
  
glBegin(GL_QUADS);  
glMultiTexCoord2i(GL_TEXTURE0, 0, 0);  
glVertex2f(x, y);  
glMultiTexCoord2i(GL_TEXTURE0, oWidth, 0);  
glVertex2f(x+width, y);  
glMultiTexCoord2i(GL_TEXTURE0, oWidth, oHeight);  
glVertex2f(x+width, y+height);  
glMultiTexCoord2i(GL_TEXTURE0, 0, oHeight);  
glVertex2f(x, y+height);  
  
glMultiTexCoord2i(GL_TEXTURE1, 0, 0);  
glVertex2f(0, 0);  
glMultiTexCoord2i(GL_TEXTURE1, 1024, 0);  
glVertex2f(1024, 0);  
glMultiTexCoord2i(GL_TEXTURE1, 1024, 768);  
glVertex2f(1024, 768);  
glMultiTexCoord2i(GL_TEXTURE1, 0, 768);  
glVertex2f(0, 768);  
glEnd();  
  

Looks like you got your vertices wrong. See the example in the post above yours: Supply just 1 vertex for every 2 texture coordinates. This way the graphics card understands how to layer the textures. So:

glMultiTexCoord2i( *top-left texture coord of video* );
glMultiTexCoord2i( *top-left texture coord of mask* );
glVertex2f( *top-left coordinate of your object* );
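One way to keep that ordering straight is to treat each quad corner as one record: both texture coordinates plus the single vertex they map onto. A standalone sketch of that interleaving (the struct and function names are illustrative, not GL calls):

```cpp
#include <vector>

// One corner of a multi-textured quad: texture coordinates for units
// 0 and 1, then the screen-space vertex they both map onto.
struct Corner {
    float tc0x, tc0y; // GL_TEXTURE0 coordinate (e.g. the video)
    float tc1x, tc1y; // GL_TEXTURE1 coordinate (e.g. the mask)
    float vx, vy;     // vertex position on screen
};

// Build the four corners of an axis-aligned quad, pairing both sets of
// texture coordinates with each vertex instead of issuing each texture's
// coordinates with its own separate set of vertices.
std::vector<Corner> makeQuad(float x, float y, float w, float h,
                             float tw0, float th0, float tw1, float th1) {
    return {
        {0,   0,   0,   0,   x,     y    },
        {tw0, 0,   tw1, 0,   x + w, y    },
        {tw0, th0, tw1, th1, x + w, y + h},
        {0,   th0, 0,   th1, x,     y + h},
    };
}
```

When drawing, each Corner becomes two glMultiTexCoord2f calls followed by one glVertex2f, exactly as in the working example above.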

Also note that, depending on whether you use ARB textures or not, texture coordinates are either expressed as absolute pixels (like in your example) or as percentages (from 0.0 to 1.0). Percentages are calculated as:

percentage = width of texture in pixels / ofNextPow2(width);

and

percentage = height of texture in pixels / ofNextPow2(height);

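A standalone sketch of that calculation, with nextPow2 standing in for ofNextPow2:

```cpp
// Smallest power of two >= n, standing in for ofNextPow2.
int nextPow2(int n) {
    int p = 1;
    while (p < n) p *= 2;
    return p;
}

// Texture-coordinate percentage for non-ARB (power-of-two) textures:
// the fraction of the padded texture actually covered by the image.
float texPercent(int sizeInPixels) {
    return static_cast<float>(sizeInPixels) / nextPow2(sizeInPixels);
}
```

So a 640x480 image stored in a 1024x512 power-of-two texture spans 0.0–0.625 horizontally and 0.0–0.9375 vertically.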
also, in 0.061 we’ve introduced two helper functions for getting coordinates in ofTexture:

ofPoint getCoordFromPoint(float xPos, float yPos);
ofPoint getCoordFromPercent(float xPts, float yPts);

Both return *proper* values depending on whether the texture is ARB or not, i.e. it will return 0–1 if it’s non-ARB and 0–w if it’s ARB. You can supply either a percentage or a point and get back the coordinate in the appropriate scale. It’s great for texCoord stuff, drawing a subregion of a texture, etc., if you don’t know whether it’s ARB or not.
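A standalone sketch of the logic these helpers are described as implementing (this mimics the behavior above; it is not the oF source, and the names are illustrative):

```cpp
// Smallest power of two >= n, standing in for ofNextPow2.
int pow2ceil(int n) {
    int p = 1;
    while (p < n) p *= 2;
    return p;
}

// Mimics the described behavior of getCoordFromPercent: for ARB
// textures, coordinates are in pixels (0..w); for non-ARB textures,
// they are normalized (0..1) relative to the power-of-two padded size.
float coordFromPercent(float pct, int sizeInPixels, bool isArb) {
    if (isArb) return pct * sizeInPixels;
    return pct * sizeInPixels / pow2ceil(sizeInPixels);
}
```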

take care,
zach
