Difference keying live webcam video

Hi guys,

I programmed the following setup in OF.

-Background video is playing.
-On top of the video I show a live feed from a camera. The live camera image is masked against a static capture from the webcam (difference keying). The result is RGBA, so the masked pixels are transparent, showing the video underneath.

The following code works, but it is too slow: it only runs at 12 fps (and memory is leaking). It would be great if anyone could take a look at the code and tell me how it can be improved.

I have the feeling that my final masking step could be done much quicker with ofxCV functions. I just don't know which ones.


Also, the keying result is not brilliant. Suggestions to improve the keying quality are welcome too! See attachment.


cheers, Bart

void testApp::setup(){  
	// setup video dimensions  
	videoWidth = 320;  
	videoHeight = 240;  
	outputWidth = 320;  
	outputHeight = 240;  
	// masked image ofTexture  
	// videograbber init  
	// background quicktime  
	// video source  
	// grayscale source  
	// static difference image  
	// difference (mask) between grayscale source and static image  
	bLearnBakground = true;  
	threshold = 80;  
}  
void testApp::update(){  
	bool bNewFrame = vidGrabber.isFrameNew();  
	if (bNewFrame){  
		colorImg.setFromPixels(vidGrabber.getPixels(), outputWidth, outputHeight);  
		grayImage = colorImg;  
		// learn new background image  
		if (bLearnBakground == true){  
			grayBg = grayImage;	// the = sign copies the pixels from grayImage into grayBg (operator overloading)  
			bLearnBakground = false;  
		}  
		// take the abs value of the difference between background and incoming and then threshold:  
		grayDiff.absDiff(grayBg, grayImage);  
		grayDiff.blur( 3 );  
		// pixel array of the mask  
		unsigned char * maskPixels = grayDiff.getPixels();  
		// pixel array of webcam video  
		unsigned char * colorPixels = colorImg.getPixels();  
		// number of pixels in the mask  
		int numPixels = outputWidth * outputHeight;  
		// masked video image (RGBA) (final result)  
		unsigned char * maskedPixels = new unsigned char[outputWidth*outputHeight*4];  
		// loop over the mask  
		for(int i = 0; i < numPixels; i++){  
			int basePixelRGBA = 4 * i;  
			int basePixelRGB = 3 * i;  
			// compose final result: RGB from the webcam source, alpha from the mask  
			maskedPixels[ basePixelRGBA + 0 ] = colorPixels[basePixelRGB];  
			maskedPixels[ basePixelRGBA + 1 ] = colorPixels[basePixelRGB+1];  
			maskedPixels[ basePixelRGBA + 2 ] = colorPixels[basePixelRGB+2];  
			maskedPixels[ basePixelRGBA + 3 ] = maskPixels[i]; // alpha channel from mask pixel array  
		}  
		// load final image into texture  
		maskedImg.loadData(maskedPixels, outputWidth, outputHeight, GL_RGBA);  
	}  
}  
void testApp::draw(){  
	// draw bg video  
	// draw masked webcam feed  
	// info  
	char reportStr[1024];  
	sprintf(reportStr, "bg subtraction and blob detection\npress ' ' to capture bg\nthreshold %i (press: +/-)\nfps: %f", threshold, ofGetFrameRate());  
	ofDrawBitmapString(reportStr, 20, 600);  
}  
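For reference, stripped of the OF types, my masking step boils down to this (plain C++, no OF; the buffer names are just illustrative):

```cpp
#include <cstdlib>

// Difference keying, per pixel: RGB comes from the live color frame, alpha is
// the absolute difference between the live grayscale frame and the stored
// background (this is what ofxCvGrayscaleImage::absDiff does internally).
void differenceKey(const unsigned char* gray,    // live grayscale frame
                   const unsigned char* grayBg,  // captured background frame
                   const unsigned char* rgb,     // live color frame (RGB)
                   unsigned char* rgbaOut,       // preallocated RGBA output
                   int numPixels)
{
    for (int i = 0; i < numPixels; ++i) {
        rgbaOut[4*i + 0] = rgb[3*i + 0];
        rgbaOut[4*i + 1] = rgb[3*i + 1];
        rgbaOut[4*i + 2] = rgb[3*i + 2];
        rgbaOut[4*i + 3] = (unsigned char)abs((int)gray[i] - (int)grayBg[i]);
    }
}
```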

this line

 unsigned char * maskedPixels = new unsigned char[outputWidth*outputHeight*4];  

and any other place you see a new without a matching delete is both a memory leak and a performance problem: you are asking the computer to allocate that many bytes every single frame.

If the size of your image buffer (array of pixels) is not changing, it's better to do this in setup, i.e.:

in testApp.h file:   
 unsigned char * maskedPixels;  
in testApp::setup():   
maskedPixels = new unsigned char[outputWidth*outputHeight*4];  

does that make sense?

You can pair new and delete every frame, but it's much faster to allocate once in setup than to allocate every frame (for things that don't change in size, for example).
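Something like this, stripped down to plain C++ (names are illustrative, not OF API): the buffer is created once, reused every frame, and released exactly once.

```cpp
#include <cstddef>

// Allocate-once pattern: one new[] in setup(), one matching delete[] at the
// end of the object's life, nothing allocated per frame.
struct FrameBuffer {
    unsigned char* pixels = nullptr;
    std::size_t size = 0;

    void setup(int w, int h) {                       // call once, like testApp::setup()
        size = static_cast<std::size_t>(w) * h * 4;  // RGBA
        pixels = new unsigned char[size];
    }
    ~FrameBuffer() { delete[] pixels; }              // matching delete[] for the new[]
};
```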

take care,

Thanks Zach! Much appreciated. I allocated the pixel-arrays in the setup and now the memory leak is gone. Next step is improving the keying algorithm.
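A first thing I'll try (untested, just an idea): binarizing the difference mask with my existing threshold value, so pixels become either fully opaque or fully transparent instead of semi-transparent ghosts. As standalone C++:

```cpp
// Hard-threshold the grayscale difference mask in place: values above
// `threshold` become fully opaque (255), everything else fully transparent (0).
void binarizeMask(unsigned char* mask, int numPixels, int threshold)
{
    for (int i = 0; i < numPixels; ++i)
        mask[i] = (mask[i] > threshold) ? 255 : 0;
}
```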

I’m a complete C++ noob. Any chance you could share the missing header file and any improvements made to the example file?

It’s the “testApp.h”, which is in the “MyFirstProject” folder. You should probably read the tutorials if you are new.

Thanks. After 2 hours of googling I’m down to 2 errors:

No member named ‘idleMovie’ in ‘ofVideoPlayer’

No member named ‘grabFrame’ in ‘ofVideoGrabber’

My complete header file:

#pragma once

#include "ofMain.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp{

	public:
		void setup();
		void update();
		void draw();
		void keyPressed(int key);
		void keyReleased(int key);
		void mouseMoved(int x, int y );
		void mouseDragged(int x, int y, int button);
		void mousePressed(int x, int y, int button);
		void mouseReleased(int x, int y, int button);
		void windowResized(int w, int h);
		void dragEvent(ofDragInfo dragInfo);
		void gotMessage(ofMessage msg);
		int videoWidth;
		int videoHeight;
		int outputWidth;
		int outputHeight;
        int threshold;
		ofVideoGrabber vidGrabber;
		ofVideoPlayer vidPlayer;

        ofxCvColorImage colorImg;
		ofxCvGrayscaleImage grayImage;
		ofxCvGrayscaleImage grayBg;
		ofxCvGrayscaleImage grayDiff;
		ofTexture maskedImg;
		bool bLearnBakground;
};

After checking, I discovered I need to change both ‘idleMovie’ and ‘grabFrame’ to ‘update’.
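For anyone else hitting the same errors: in newer openFrameworks versions both calls were folded into update(), so the fix looks like this (OF fragment, not standalone-compilable):

```cpp
// In update(), replace the removed calls with update():
vidGrabber.update();   // was: vidGrabber.grabFrame();
vidPlayer.update();    // was: vidPlayer.idleMovie();
```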