black parts background subtraction


i’m having some problems with removal of the background.

if i have a static background with some black parts, the pixels that were black or near black in my background will show up black in the thresholded image.

imagine if i put my hand in front of a black object: my threshold will show my hand with a hole in it.

does anyone have an idea how to solve this problem?

Thank you.

Are you using the openCV absDiff and threshold method for differencing?

If so, I would consider trying a colour differencing function… The problem with black and white differencing is that a dark red area might be the same shade as a dark blue area, and in black and white they would appear as the same colour - even though in colour they clearly are different.

Here is my colour differencing algorithm that works “pretty good”. The only difference from calling the absDiff and threshold methods is that you pass two ofCvColorImages to get your ofCvGrayscaleImage result (and that it’s one function vs two ;)).

void TI_Colour_FrameDifferencer::calculateColourDifference(ofCvColorImage* img1, ofCvColorImage* img2, ofCvGrayscaleImage* result, int threshold){  
	int width = result->width;  
	int height = result->height;  
	int bpp = 3; // channels in the colour images (RGB)  
	unsigned char* pixels1 = img1->getPixels();  
	unsigned char* pixels2 = img2->getPixels();  
	unsigned char* resPixels = new unsigned char[width * height]; // single channel - b & w  
	bool pixelDiff;  
	for(int i = 0; i < height; i++){  
		for(int j = 0; j < width; j++){  
			pixelDiff = false;  
			for(int b = 0; b < bpp; b++){  
				int diff = pixels1[(i*width+j)*bpp+b] - pixels2[(i*width+j)*bpp+b];  
				diff = (diff < 0) ? -diff : diff;  
				if(diff > threshold){  
					pixelDiff = true;  
					break; // one channel over the threshold is enough  
				}  
			}  
			resPixels[i*width+j] = pixelDiff ? 255 : 0;  
		}  
	}  
	result->setFromPixels(resPixels, width, height);  
	delete[] resPixels; // array form to match new[]  
}  

As for patching holes… this is something I’m working on/experimenting with. So far the best approach I’ve got is that for blobs detected as holes, I recalculate the difference for those areas at a much lower (almost 0) threshold. It’s still not perfect, but getting there. The problem I’m having is that if the “hole” blob isn’t entirely enclosed by another blob, it doesn’t count as a hole (it doesn’t even detect as a blob!), but it just gets removed from the larger blob as a chunk missing from the side.


here is another interesting option:


add in ofxCvGrayscaleImage.h:

void adaptiveThreshold( int value, int neighbor);  

add in ofxCvGrayscaleImage.cpp:

void ofxCvGrayscaleImage::adaptiveThreshold( int value, int neighbor) {  
	// mean-based adaptive threshold; neighbor must be an odd block size >= 3  
	cvAdaptiveThreshold( cvImage, cvImageTemp, value,  
	                     CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY, neighbor );  
	swapTemp();  
}  

and in the app use for example:

gray.adaptiveThreshold(255, 111);  

greetings ascorbin

hi ascorbin,

i just don’t know in which order i should use your function:

before the normal threshold, or
whether i should skip the other threshold and just use this one.

Thank you

hi outdoors
only use one threshold method, normal or adaptive. for the neighbour value, only use odd numbers of 3 or greater: 3, 5, 7, 11, … otherwise it will crash.

i managed to get your adaptive method to work.

i used the one from plong0, but solving one problem gave me another:

when i have a black area against another colour it works, but if it’s black against something dark it doesn’t :frowning: i’m back to square one…

yeah, the black-on-black thing is a problem I’ve been trying to solve… It’s very tough though, because imagine a dark room with someone wearing dark clothes in it… even to the human eye, it is tough to distinguish them.

We tried doing some work using an infrared camera… it gives some interesting results, but I don’t think it would be reliable in all cases. Different materials seem to reflect IR very differently. For example, I have a black toque that wouldn’t show up against a black background in the full-spectrum capture… but in infrared it’s a very bright white and shows up nicely against the black background. My eyes, on the other hand, appear very different from the background in the full spectrum, but appear as black dots in the infrared capture.

It’s a tough problem we have here. Guess there’s a reason why they use green screens for this type of stuff :stuck_out_tongue:

Our solution (sadly, a pretty specific one and a bit of a hack) for now is that we know the locations where the work will be installed, so we know that the camera will be looking downward towards a grey sidewalk… So then it’s a lot easier to be confident that most people will contrast with the background and be detectable.

I have thought about doing something with blob/motion tracking and using the shape/size of the object from past frames to try to fill in the blanks… but it’s hard for me to see how this would work because you never know which frame has the full capture of the person. Not only that, but what if the person changes their shape by turning sideways or putting their arms out?

Anyway, I’d be interested to know about any progress you make on this. I will also keep you updated if I make any big progress.