alphaMask shader example and texture/image size/dims problem

I’m working off the 05_alphaMasking example and having trouble making my image mask align properly with the texture I am processing.

I’ve built an app where live camera input is recorded and then the alpha-mask shader is applied.
My live input is 720p:

and I have a mask that is also 1280x720:

their aspect ratios match.

However, when I apply the shader like this:

        ofPixels & pixels = vidGrabber.getPixels();
        shader.begin();
        shader.setUniformTexture("imageMask", imageMask.getTextureReference(), 1);
        ofTexture livefeed;
        livefeed.loadData(pixels);
        livefeed.draw(xPos - camWidth * scale/2, yPos - camHeight * scale/2, camWidth * scale, camHeight * scale);
        shader.end();

the image mask gets cut off along the top and the bottom:

Am I missing a step that would ensure that my image-mask texture matches the size/dimensions of the texture I’m processing?

I should add that the example project itself, included in OF 0.9.8, seems to cut off the mask as well.

Its mask file looks like this in Photoshop:

Oddly, when I preview the file in the Finder on OS X, it looks like this:

notice that the top/bottom are cut off.

and the processed texture with the built OF example looks like this (also cut off)

not sure what’s going on but:

  • you don’t need to upload the video player pixels to a texture; the video player itself contains a texture, so drawing it is enough

  • also there’s a simpler way to do an alpha mask, in setup:
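A minimal sketch of that simpler approach, assuming the vidGrabber from this thread and a hypothetical mask file mask.png the same size as the camera input, using ofTexture::setAlphaMask:

```cpp
// in setup(), after initializing the grabber:
ofImage mask;
mask.load("mask.png"); // hypothetical mask file, same size as the camera feed
vidGrabber.getTexture().setAlphaMask(mask.getTexture());
```

After this, drawing vidGrabber normally in draw() shows the masked video; no shader code is needed.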


Thanks @arturo, that is very helpful.

I’m working on a more complex shader that does the alpha masking as well as a few other things; I’m only testing with this shader.

I found the issue with the alpha mask not aligning. I remade a new .png at 1280x720, and now the shader example works fine. Unfortunately, I can’t tell why. Here is the “good” image mask; the “bad” one is the one that comes with OF 0.9.8.

As for the simpler way of doing an image mask:


Is this as computationally efficient as running the image mask in a shader:


And one more question:

I’ve reimplemented the simpler method that @arturo suggested, but I find that my video playback from a buffer works when I use vidGrabber.getPixels() and does not when I use vidGrabber.getTexture();

This one records a stream of different video frames:

ofPixels & thisPixels = vidGrabber.getPixels();
myVideoFrames[index].allocate(thisPixels);

This one seems to put the latest video frame in all the indices of my buffer:

ofTexture & thisTexture = vidGrabber.getTexture();
myVideoFrames[index] = thisTexture;

It seems that the ofTexture method is keeping a reference to a texture which is constantly being updated when I grab new frames from the camera, whereas the ofPixels method is copying each new video frame.
I may be making a mistake with pointers and references, or maybe I’m missing the bigger picture?

Yes, this method is as fast as using a shader.

And yes, you can’t copy a texture like that. When you do that with GL objects you are just making a reference, not a full copy, so if you need to store previous frames you’ll need to keep the frames, or draw the texture into an fbo to copy the texture.


I thought that I was “keeping the frame” by doing this:

myVideoFrames[index] = thisTexture;

but I’m obviously not.
How do I make sure to keep/copy the frame if my goal is to put previous frames at a specific index of a vector<ofTexture>?

Should I be storing frames in a vector<ofFbo> instead?

As I said, you need an fbo in order to copy a texture; instead of a vector of textures, create a vector of fbos and draw the texture into them.

But if you have the pixels anyway and have to upload them (the video player does upload them internally), it’s probably not worth it; just upload the pixels to each texture. If you want to avoid two uploads, just disable the texture in the player using videoplayer.setUseTexture(false).

Great, thanks a lot @arturo, this is starting to make sense.

If my goal is to

  1. store many frames in an array of textures/fbos, and
  2. process them with shaders

is there an advantage/disadvantage to one of the two techniques (i.e. using vidGrabber.getPixels() or vidGrabber.getTexture())?

No, not really. The only difference would be in speed if you had the texture but not the pixels, but since you already have the pixels, either should be really similar.

Ok, so if I manage to make my application without any .getPixels() and .allocate(ofPixels thePixels) calls, but rather all with textures, there will be a speed advantage.

I’m looking at the ofFbo documentation and wondering which method I use to copy the texture from my vidGrabber to the fbo?



you don’t need to allocate any pixels, just create a vector of textures and each frame load the pixels from the videoplayer into a new texture:
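A sketch of that per-frame upload, assuming the std::vector<ofTexture> myVideoFrames from the earlier posts and the camera grabber vidGrabber:

```cpp
// each new frame: upload the grabber's pixels into a brand-new texture,
// so every stored frame owns its own copy on the GPU
if(vidGrabber.isFrameNew()){
    ofTexture tex;
    tex.allocate(vidGrabber.getPixels());  // allocate to match this frame
    tex.loadData(vidGrabber.getPixels());  // upload this frame's pixel data
    myVideoFrames.push_back(tex);
}
```

Because each texture is freshly allocated, the stored frames no longer all point at the grabber’s single, constantly updated texture.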


if you want to use an fbo just draw the texture into it.
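With fbos the copy is just a draw call (sketch, assuming a std::vector<ofFbo> myVideoFrames allocated in setup() to the camera size; camWidth/camHeight as earlier in the thread):

```cpp
// in setup(), once per stored frame:
// for(auto & f : myVideoFrames) f.allocate(camWidth, camHeight, GL_RGBA);

// per frame: draw the current camera texture into the fbo at `index`
myVideoFrames[index].begin();
ofClear(0, 0, 0, 255);
vidGrabber.getTexture().draw(0, 0);
myVideoFrames[index].end();
```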


I’m trying to implement this but I can’t get it to show the mask:

in ofApp.h:

ofVideoGrabber vidGrabber;
ofTexture videoTexture;
ofImage alphaMaskImage;
ofTexture alphaMaskTexture;

in ofApp::setup():

vidGrabber.initGrabber(camWidth, camHeight);

alphaMaskTexture = alphaMaskImage.getTexture();
//i tried doing one of these, or both of them; no effect

And then in ofApp::update() and ofApp::draw()

void ofApp::update(){
    ofBackground(100, 100, 100);
    videoTexture = vidGrabber.getTexture();
}

void ofApp::draw(){
    vidGrabber.draw(20, 20);
}

no mask visible.

What am I doing wrong?

can you try to reset the mask after every new frame?

If by reset the mask you mean, run this line during update instead of setup:


I tried that and same results.

Here is another test that has confusing results, but may reveal something:

I made a simple test app called wecam2syphon that does the following:

  • Grab a frame from the camera as a texture; apply the mask while grabbing
  • Draw the grabbed texture onto the OF window
  • Publish the grabbed texture on Syphon so I can view it with Syphon Simple Client

Here is what is drawn in OF; notice that the mask IS applied, but the video frame from the camera is not there:

Here is what the Syphon Client receives: notice that the camera image is there, but the mask is not:

this works for me:

class ofApp: public ofBaseApp{
	public:
	ofVideoGrabber video;
	ofFbo fbo;
	void setup(){
		video.setup(640, 480);
		fbo.allocate(640, 480, GL_LUMINANCE);
		fbo.begin();                  // draw the mask shape once: white = visible
		ofClear(0, 255);
		ofDrawCircle(320, 240, 200);
		fbo.end();
		video.getTexture().setAlphaMask(fbo.getTexture());
	}
	void update(){ video.update(); }
	void draw(){ video.draw(0, 0); }
};

int main(){
	ofWindowSettings settings;
	ofCreateWindow(settings);
	ofRunApp(new ofApp);
}

I’m using an fbo so I don’t have to create the texture, but an image should do too. Take into account that the mask comes from the colors, not from the alpha. Also, sending the texture through Syphon won’t work, since the mask is not really applied to the texture; the texture is just drawn using it.

Depending on your GL version (mine is v2.1), on oF 0.9.3 and above the line

fbo.allocate(640,480,GL_LUMINANCE);

should be as follows:

fbo.allocate(640,480,GL_RGBA); // or GL_RED