Difference keying live webcam video

Hi guys,

I programmed the following setup in OF.

- Background video is playing.
- On top of the video I draw a live feed from a camera. The live camera image is masked against a static capture from the webcam (difference keying). The masked image is RGBA, so the keyed-out pixels are transparent and show the video underneath.

The following code works, but it is too slow. It only runs at 12fps (and memory is leaking). It would be great if someone could take a look at the code and tell me how it can be improved.

I have the feeling that my final masking step could be done much faster with ofxOpenCv functions, but I don't know which ones.


Also, the keying result is not brilliant. Suggestions to improve the keying quality are welcome too! See attachment.


cheers, Bart

void testApp::setup(){
	// setup video dimensions
	videoWidth = 320;
	videoHeight = 240;
	outputWidth = 320;
	outputHeight = 240;
	// masked image ofTexture
	maskedImg.allocate(outputWidth, outputHeight, GL_RGBA);
	// videograbber init
	vidGrabber.initGrabber(videoWidth, videoHeight);
	// background quicktime (loading code omitted here)
	// video source
	colorImg.allocate(outputWidth, outputHeight);
	// grayscale source
	grayImage.allocate(outputWidth, outputHeight);
	// static difference image
	grayBg.allocate(outputWidth, outputHeight);
	// difference (mask) between grayscale source and static image
	grayDiff.allocate(outputWidth, outputHeight);
	bLearnBakground = true;
	threshold = 80;
}
void testApp::update(){
	vidGrabber.update();
	bool bNewFrame = vidGrabber.isFrameNew();
	if (bNewFrame){
		colorImg.setFromPixels(vidGrabber.getPixels(), outputWidth, outputHeight);
		grayImage = colorImg;
		// learn new background image
		if (bLearnBakground == true){
			grayBg = grayImage;	// the = sign copys the pixels from grayImage into grayBg (operator overloading)
			bLearnBakground = false;
		}
		// take the abs value of the difference between background and incoming and then threshold:
		grayDiff.absDiff(grayBg, grayImage);
		grayDiff.threshold(threshold);	// binarise the mask (threshold adjustable with +/-)
		grayDiff.blur(3);				// soften the mask edges
		// pixels array of the mask
		unsigned char * maskPixels = grayDiff.getPixels();
		// pixel array of webcam video
		unsigned char * colorPixels = colorImg.getPixels();
		// numpixels in mask
		int numPixels = outputWidth * outputHeight;
		// masked video image (RGBA) (final result)
		// note: allocating this once in setup() as a member would avoid the per-frame allocation
		unsigned char * maskedPixels = new unsigned char[outputWidth * outputHeight * 4];
		// loop the mask
		for (int i = 0; i < numPixels; i++){
			int basePixelRGBA = 4 * i;
			int basePixelRGB = 3 * i;
			// compose final result: RGB from the webcam, alpha from the mask
			maskedPixels[ basePixelRGBA + 0 ] = colorPixels[basePixelRGB];
			maskedPixels[ basePixelRGBA + 1 ] = colorPixels[basePixelRGB + 1];
			maskedPixels[ basePixelRGBA + 2 ] = colorPixels[basePixelRGB + 2];
			maskedPixels[ basePixelRGBA + 3 ] = maskPixels[i]; // alpha channel from mask pixel array
		}
		// load final image into texture
		maskedImg.loadData(maskedPixels, outputWidth, outputHeight, GL_RGBA);
		delete [] maskedPixels; // free the buffer -- without this it leaks every frame
	}
}
void testApp::draw(){
	ofSetColor(255, 255, 255);
	// draw bg video (player code omitted here)
	// draw masked webcam feed (alpha blending is needed for the transparent mask pixels)
	ofEnableAlphaBlending();
	maskedImg.draw(0, 0, outputWidth, outputHeight);
	ofDisableAlphaBlending();
	// info
	char reportStr[1024];
	sprintf(reportStr, "bg subtraction and blob detection\npress ' ' to capture bg\nthreshold %i (press: +/-)\nfps: %f", threshold, ofGetFrameRate());
	ofDrawBitmapString(reportStr, 20, 600);
}

Thanks Zach! Much appreciated. I allocated the pixel arrays in setup() and now the memory leak is gone. Next step is improving the keying algorithm.