Remixing multiple videos into single frame w/ masks, alphas, and shaders

Hi everyone,

The project I’m working on requires that we take frames from many different videos, blend them together using masks and alpha transparency, and display this “remixed” frame onto a screen in real time. Essentially we are layering and blending multiple videos together onto the same screen area.

The forum has a related question on this (,-and-alpha-channels/5740/0), but no clear solution ever surfaced from that conversation.

Our team decided to use shaders to perform the remixing efficiently, but has anyone worked with shaders well enough to know how to achieve this using them? I can manipulate one frame from one video at a time using a shader to output a modified frame, but I don’t know how to manipulate multiple frames in a shader to output a single remixed frame.
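To make the goal concrete, here is the per-pixel math the shader would end up doing, sketched on the CPU (names and the 8-bit pixel layout are just illustrative; in practice you would run this as a fragment shader over every pixel in parallel):

```cpp
#include <cstdint>

// One RGBA pixel, channels in 0-255.
struct Pixel { uint8_t r, g, b, a; };

// Standard "over" compositing of a foreground layer onto a background,
// using the foreground's alpha as the blend weight. A fragment shader
// performs this same calculation once per fragment on the GPU.
Pixel blendOver(const Pixel& fg, const Pixel& bg) {
    float a = fg.a / 255.0f;
    auto mix = [a](uint8_t f, uint8_t b) {
        return static_cast<uint8_t>(f * a + b * (1.0f - a) + 0.5f);
    };
    return { mix(fg.r, bg.r), mix(fg.g, bg.g), mix(fg.b, bg.b), 255 };
}
```

Layering more than two videos is just repeated application of this operation, bottom layer first.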

Even pointers to the name of a relevant addon or a link to a relevant post would be enough to point me in the right direction, if you don’t have time to explain in detail.

Thanks in advance!

Sounds like a job for ofFBOtexture. (I’m just starting a project like this myself, let me know if you want to trade notes as we go.)
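For what an FBO-based approach might look like: in OF 007 the built-in ofFbo covers the same ground as the older ofxFBOTexture addon. A rough sketch (file names and sizes are placeholders, and this assumes the 007 API):

```cpp
// Sketch only: assumes openFrameworks 007's built-in ofFbo
// (earlier versions used the ofxFBOTexture addon instead).
ofFbo fbo;
ofVideoPlayer videoA, videoB;

void testApp::setup(){
	fbo.allocate(640, 480, GL_RGBA);
	videoA.loadMovie("a.mov"); // example file names
	videoB.loadMovie("b.mov");
}

void testApp::update(){
	videoA.update();
	videoB.update();
	fbo.begin(); // everything drawn here lands in the FBO's texture
	ofClear(0, 0, 0, 0);
	videoA.draw(0, 0);
	ofEnableAlphaBlending();
	ofSetColor(255, 255, 255, 128); // 50% alpha for the second layer
	videoB.draw(0, 0);
	fbo.end();
}

void testApp::draw(){
	fbo.draw(0, 0); // the remixed frame, ready for further masking/shading
}
```

The FBO's texture can then be fed into a shader pass if you need per-pixel masking on top of the simple alpha blend.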

I found this on the forums yesterday and glanced at the code; it seems to be doing something similar to what I want, that is, getting texture data from two different sources and remixing them in a shader.’s/419/42

I did something similar. Just two videos where 1 masked the other. But it could be expanded easily.

Create the frag and vert shader files with this: just make a text file and change the extension to .vert or .frag (both with the same base name), then save them in the data folder of your OF program.


uniform sampler2DRect Tex0, Tex1; //these are our texture names, set in openFrameworks on the shader object in setup  
void main (void)  
{  
	//sample both textures  
	vec4 image = texture2DRect(Tex0, gl_TexCoord[0].st);  
	vec4 composite = texture2DRect(Tex1, gl_TexCoord[1].st);  
	//use the color from the image, but use the r channel of the mask as the alpha channel of our output  
	gl_FragData[0] = vec4(image.rgb, composite.r);  
}  


void main(void)  
{  
	//this is a default vertex shader: all it does is transform the vertex...  
	gl_Position = ftransform();  
	//...and pass the multitexture coordinates along to the fragment shader  
	gl_TexCoord[0] = gl_MultiTexCoord0;  
	gl_TexCoord[1] = gl_MultiTexCoord1;  
}  

//add to your OF project  
//Add the following to testApp.h  
ofShader shader;  
ofImage colorImg;  
ofImage colorImg2;  
int x, y, w, h;  

//add all this to testApp.cpp  
void testApp::setup(){  
	x = 0; y = 0; w = 320; h = 240; //pick whatever size you need  
	colorImg.allocate(w, h, OF_IMAGE_COLOR); //you must adapt this depending on your needs.  
	colorImg2.allocate(w, h, OF_IMAGE_COLOR); //if you are using a video, grab the video texture  
	shader.load("composite"); //"composite" is an example name: loads composite.vert and composite.frag from data/  
}  

void testApp::draw(){  
	shader.begin();  
	shader.setUniformTexture("Tex0", colorImg.getTextureReference(), 0);  
	shader.setUniformTexture("Tex1", colorImg2.getTextureReference(), 1);  
	glBegin(GL_QUADS);  
		glMultiTexCoord2d(GL_TEXTURE0_ARB, 0, 0);  
		glMultiTexCoord2d(GL_TEXTURE1_ARB, 0, 0);  
		glVertex2f( x, y );  
		glMultiTexCoord2d(GL_TEXTURE0_ARB, colorImg.getWidth(), 0);  
		glMultiTexCoord2d(GL_TEXTURE1_ARB, colorImg2.getWidth(), 0);  
		glVertex2f( w, y );  
		glMultiTexCoord2d(GL_TEXTURE0_ARB, colorImg.getWidth(), colorImg.getHeight());  
		glMultiTexCoord2d(GL_TEXTURE1_ARB, colorImg2.getWidth(), colorImg2.getHeight());  
		glVertex2f( w, h );  
		glMultiTexCoord2d(GL_TEXTURE0_ARB, 0, colorImg.getHeight());  
		glMultiTexCoord2d(GL_TEXTURE1_ARB, 0, colorImg2.getHeight());  
		glVertex2f( x, h );  
	glEnd();  
	shader.end();  
}  
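As mentioned, this expands easily beyond masking. For example, a crossfade between the two textures can be driven by a uniform (the `fade` name is just an example; set it from OF each frame with the shader's setUniform call):

```glsl
uniform sampler2DRect Tex0, Tex1;
uniform float fade; // 0.0 = all Tex0, 1.0 = all Tex1

void main (void)
{
	vec4 a = texture2DRect(Tex0, gl_TexCoord[0].st);
	vec4 b = texture2DRect(Tex1, gl_TexCoord[1].st);
	//mix() does the per-pixel linear blend; any blend formula could go here
	gl_FragColor = mix(a, b, fade);
}
```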

Tagging along to this: I’ve updated the example that shader came from:

Should work with 007 now and has a project file attached to it.