limiting blob detection area with OpenCV and Kinect

hi,

I’m doing some experiments with Kinect, where I want to do blob detection on the depth image and get the centroid of each detected blob. So I modified Theo’s ofxKinect example. Since I want to work with 2 Kinects, I figured I should draw the thresholded depth image at a smaller size, just to save memory. However, I found that I can only do blob detection and display the resulting image at the native Kinect resolution (640x480). I’m pretty sure there’s a way to do this by limiting the blob detection area; I just can’t find it.

Here’s part of the code that I use

  
void testApp::draw() {  
	// draw from the live kinect  
	kinect.drawDepth(10, 10, 400, 300);  
	//kinect.draw(420, 10, 400, 300);  
  
	grayImage.draw(420, 10, 640, 480);  
	//contourFinder.draw(420, 10, 400, 300); // -> can draw a 400x300 image  
  
	/*ofFill();  
	ofSetHexColor(0x333333);  
	ofRect(420, 10, 400, 300);  
	ofSetHexColor(0xffffff);*/  
  
	for (int i = 0; i < contourFinder.nBlobs; i++) {  
		ofxCvBlob& blob = contourFinder.blobs.at(i);  
		blob.draw(420, 10); // but this one can't be drawn at 400x300  
		point  = blob.centroid;  
		pointX = point.x;  
	}  
}  

Does anybody have an idea how to pull this off?

thanks

I am not sure I exactly get what you mean. Do you want to draw the Kinect image, grayImage, and the blobs as one scaled-down image?

Or are you looking to crop the area in which you search for blobs/depth data?