Realtime video compositing

I need advice about realtime video compositing…
The goal is to add text before or during video playback, similar to what is shown in this Flash video:

Please if someone can give me guidance…


…add text at a specific time on a specific location plus text motion tracking

Ok, what is the best-suited addon (or method) for blending a video with an image?

it’s generally not a good idea to put an .exe online to show someone something – although we are a trustworthy bunch, it’s usually better to share code or a video (screengrab, etc.) of the program running. Do you think you can upload something so it’s easier to answer your question?

Here is the effect I need to achieve, but instead of an image, the background would be a video…

What am I looking for? Displacement shader?

blend mode: linear burn
bevel & emboss

Hi there!

Yes, you can use a displacement shader, but only if you have a video feed with a depth texture (like a Kinect).
If not, maybe you can use a combination of keying and computer vision to reduce the frames to areas of interest, and use HSB values to displace. It would depend on the kind of footage and the effect you want.

How do I pass a font texture to a shader to apply some effects?


myFont.drawString(message, 100, 200);

shader.setUniformTexture("tex1", fontTex, 1);

shader.setUniformTexture("tex1", myFont.getFontTexture(), 1) doesn’t work !!!

I’m on OF v0073_win_cb…

solution with FBO

fbo.begin();
ofClear(255, 255, 255, 0);
ofSetColor(255, 0, 0, 150);
myFont.drawString(message, 100, 200);
fbo.end();

shader.setUniformTexture("tex0", fbo.getTextureReference(), 0);

Here is the first attempt (source without exe):

everyone is welcome with interesting ideas and shaders…

text outline / border

When some effects are applied to the video texture with fragment shaders from the ofxFX addon, the framerate drops to 10–15 fps (at 1280x720)…

Which video player (addon) do you recommend for windows to improve performance?

If that lower frame rate only happens with some shaders, I believe your issue is with the GPU. Usually (but it depends on the OS, codecs, etc.), video playback only uses the CPU. Some shaders may simply not be that optimised.

I always get a black screen this way ?!

void testApp::setup(){
    width  = 1280;
    height = 720;

    mult.allocate(width, height);
    bloom.allocate(width, height);
    gaussianBlur.allocate(width, height);
    bokeh.allocate(width, height);

    if (dir.size() > 0) {
        selection = -1;
    }
}

void testApp::update(){
    if (video.isFrameNew()){

        if ( selection == -1 ){         // NO FILTER
        } else if ( selection == 0 ){   // BLOOM
            bloom << lut;
        } else if ( selection == 1 ){   // GAUSSIAN BLUR
            gaussianBlur.setRadius(sin( ofGetElapsedTimef() )*10);
            gaussianBlur << lut;
        } else if ( selection == 2 ){   // BOKEH
            bokeh.setRadius(abs(sin( ofGetElapsedTimef() )*10));
            bokeh << lut;
        }
    }
}

void testApp::draw(){
    ofBackgroundGradient(ofColor::gray, ofColor::black);
    ofTranslate(ofGetWindowWidth()*0.5f, ofGetWindowHeight()*0.5f, 0);
    mult.draw(-width*0.5, -height*0.5, width, height);
}

Yes, because you are not loading the lookup tables. Check the original example from the add-on.

…missing this line in setup()

Is map an ofImage ???
When this line is included I can’t get 1280x720 video resolution. What is the lookup table in this case?

Lookup tables (more correctly 3D lookup tables) are a color space mapping technique (to basically do color correction, grading, etc.).

I’m not that familiar with ofxFX, but I know it can use LUTs. And since you weren’t loading the LUT files, you were mapping everything to zero. Hence the black screen. Did you also copy the LUT files to your bin folder? The .cube files.

Yes, I have copied all the LUT files. I think the problem is not in the LUT; it could be an issue with ofxMultiTexture (mult)…

uniform sampler2DRect tex0;
uniform sampler2DRect tex1;
uniform sampler2DRect tex2;
uniform sampler2DRect tex3;

void main (void) {
    vec2 pos = gl_TexCoord[0].st;
    vec4 mask = texture2DRect(tex0, pos);
    vec4 rTxt = texture2DRect(tex1, pos);
    vec4 gTxt = texture2DRect(tex2, pos);
    vec4 bTxt = texture2DRect(tex3, pos);
    vec4 color = vec4(0.0, 0.0, 0.0, 1.0);
    color = mix(color, rTxt, mask.r);
    color = mix(color, gTxt, mask.g);
    color = mix(color, bTxt, mask.b);
    gl_FragColor = color;
}

I just need to paste an alpha texture onto a video texture…

void testApp::draw(){
    ofClear(0, 0, 0, 1); // clear the background.
    shader.begin();
    shader.setUniformTexture("image", image, 0);
    shader.setUniform1f("red", (float) mouseX / ofGetWidth());
    shader.setUniform1f("a", (float) mouseY / ofGetHeight());
    image.draw(0, 0);
    shader.end();
}


uniform sampler2D image;
uniform float red;
uniform float a;

vec2 TexCoord0;
vec4 texColor;

void main() {
    texColor = texture2D(image, TexCoord0.xy);
    gl_FragColor = vec4(red, 0.0, 0.0, a);
}

What is missing in the shader to discard alpha pixels? I’ve got a red rectangle instead of transparent tribal…

A few things. But first: using sampler2D and texture2D in the shaders means that you have disabled rectangle textures. (I don’t know if you did, but you have to call ofDisableArbTex(); in setup().) When you do this, texture coordinates are normalized (0.0 to 1.0) instead of being dimension-dependent.

Now, the shaders. By declaring TexCoord0 as a plain global variable, you are getting nothing. In GLSL 120, to pass the texcoords from the vertex shader to the fragment shader, you have to do something like this in the vertex shader:

#version 120

   varying vec2 TexCoord0;

void main(){
    TexCoord0 = gl_MultiTexCoord0.xy;
    gl_Position = ftransform();
}

The varying means that the fragment shader will receive it interpolated. In modern OpenGL, you have out and in to do the same thing. So, in the fragment shader, you also have to declare it with varying:

#version 120

uniform sampler2D image;
uniform float a;

varying vec2 TexCoord0;
vec4 texColor;

void main() {
    texColor = texture2D(image, TexCoord0);
    texColor.a = min(texColor.a, a);
    gl_FragColor = texColor;
}

Now that you have your interpolated texcoords, you can apply them to your image.

On the second line of main(), you take the minimum of the source image alpha (either 0.0 or 1.0) and your mouse input (a range from 0.0 to 1.0), which I think is what you wanted: to fade the image.

Perfect explanation !!!

I can now change the color and fade the image…
Hubris, if you have by chance seen any displacement fragment shader examples, please point me to the link…


You can check this tutorial on the openFrameworks website. It’s the Textures as Data (e.g. Displacement) example.