Mask the camera input to pass a tracking area to OpenCV

I receive the image from a camera. I’m using a shader (from the multiTextureShaderExample) to mix the camera input texture with a PNG that has an alpha channel, in order to mask out a region of the camera image.

I defined WHD = 1280 and HHD = 720.

The setup part is this:

    //finder setup
    finder.setMinAreaRadius(20);
    finder.setMaxAreaRadius((WHD*HHD)/3);
    finder.setThreshold(30);
    finder.setFindHoles(true);
    // remember a lost object for 5 frames before forgetting it
    finder.getTracker().setPersistence(5);
    // an object can move up to 32 pixels per frame
    finder.getTracker().setMaximumDistance(32);
    
    auto possibleCameras = camera.listDevices();
    int camRef = 0;
    for(int i = 0; i < possibleCameras.size(); i++){
        if(possibleCameras.at(i).deviceName == "HD USB Camera"){
            camRef = i;
        }
    }
    camera.setDeviceID(camRef); //this is the USB camera
    //camera.setPixelFormat(OF_PIXELS_BGR);
    camera.setup(WHD, HHD, true);
    
    userMask.allocate(WHD, HHD); //this is the color image
    grayImage.allocate(WHD, HHD);
    grayBg.allocate(WHD, HHD);
    grayDiff.allocate(WHD, HHD);
    
    bLearnBakground = false;
    showGui = false;
    threshold = 30;
    
    myMaskImg.load("mask.png");
    
    fbo.allocate(WHD, HHD);
    maskFbo.allocate(WHD, HHD);
    
    //Shader area
    string shaderProgram = STRINGIFY(
                                     uniform sampler2DRect tex0;
                                     uniform sampler2DRect maskTex;

                                     void main (void){
                                         vec2 pos = gl_TexCoord[0].st;
                                         
                                         vec4 vid = texture2DRect(tex0, pos);
                                         vec4 mask = texture2DRect(maskTex, pos);
                                         
                                         vec4 color = vec4(0,0,0,0);
                                         color = mix(color, vid, mask);
                                         
                                         gl_FragColor = color;
                                     }
                                     );
    
    shader.setupShaderFromSource(GL_FRAGMENT_SHADER, shaderProgram);
    shader.linkProgram();
    
    // Let's clear the FBOs
    // otherwise it will bring some junk with it from the memory
    fbo.begin();
    ofClear(0,0,0,255);
    fbo.end();
    
    maskFbo.begin();
    ofClear(0,0,0,255);
    maskFbo.end();
    
    // Texture with alpha
    maskFbo.begin();
    ofClear(255, 255, 255, 0);
    myMaskImg.draw(0, 0, WHD, HHD);
    maskFbo.end();

    //load the background
    bg.load("myBackground.png");
    grayBg.setFromPixels(bg.getPixels());

The update looks like this:

    camera.update();
    
    // MULTITEXTURE MIXING FBO
    fbo.begin();
    ofClear(255, 255, 255, 0);
    shader.begin();
    // Pass the video texture
    shader.setUniformTexture("tex0", camera.getTexture() , 1 );

    // Pass the mask texture
    shader.setUniformTexture("maskTex", maskFbo.getTexture() , 4 );
    maskFbo.draw(0,0);
    
    shader.end();
    fbo.end();
    
    ofPixels pixels;
    pixels.allocate(WHD, HHD, OF_IMAGE_COLOR);
    fbo.readToPixels(pixels);
    result.allocate(WHD, HHD, OF_IMAGE_COLOR);
    result.setFromPixels(pixels);
    result.setImageType(OF_IMAGE_COLOR); //important!
    
    userMask.allocate(WHD, HHD);
    userMask.setFromPixels(result.getPixels());
    
    grayImage = userMask;
    
    if (bLearnBakground == true){
        grayBg = grayImage; //copies the pixels from grayImage into grayBg (operator overloading)
        bLearnBakground = false;
        bg = grayBg.getPixels();
        bg.save("myBackground.png");
    }
    
    // take the abs value of the difference between background and incoming and then threshold:
    grayDiff.absDiff(grayBg, grayImage);
    grayDiff.threshold(threshold);
    
    // find contours
    finder.findContours(grayDiff);

It actually works well, but it’s expensive: the frame rate drops from 60 fps to 30 fps.
Perhaps the transfer from the FBO to pixels is the heavy part.
So, what is the best way to mask out part of the camera input without losing half of the frame rate in the process?

Hey @Dazzid_of , one thing you could do is to make pixels a member of ofApp, allocate it once in ofApp::setup(), and then use it over and over again in ofApp::update(). Allocating an ofPixels object each cycle is probably slowing things down a bit.

The same is true of userMask (an ofTexture or an ofFbo, perhaps?), though it looks like it’s already a member of ofApp.

This would likely run really really fast in a shader (or maybe two). I often use a “ping-pong” approach with 2 ofFbo and multiple shaders in ofApp::update(). This involves reading from “one” and writing to “the other” with 1 shader, and then reading from “the other” and writing to “one” with the next shader, etc. std::swap seems to work well with ofFbo.

I think OpenCV can have some hardware acceleration, but shaders have always run much faster for me for these kinds of things.