How to normalize the Kinect's depth map

Hi everybody,

I just saw the tracking quality of Theo's latest project using the Kinect:

http://vimeo.com/16985224

Does anyone have any ideas about how to normalize this depth map, i.e. reduce the number of grey levels?

Or just remove all these little black points…

What causes them, exactly?
I'm not tracking refractive elements.
Lighting isn't the problem either… we're using IR…

Personally, I put the distance data inside an IplImage and segment it the OpenCV way.

Can you be more specific about how to use OpenCV here?
I'm brand new to OF.

First of all: this is only one way to do it; depending on your final goal, other methods might do the trick better.

Depth data comes out of ofxKinect in three flavors: a grayscale texture, the corresponding unsigned char* depth pixels, and a float* array containing the distances in cm. If you want to segment by distance using OpenCV, you'll want the second or the third.
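Roughly, grabbing those pixels looks like this. This is only a sketch: the accessor names getDepthPixels(), getDistancePixels() and drawDepth() are from the ofxKinect of that period, so check the version you have.

    // testApp.h (excerpt)
    #include "ofMain.h"
    #include "ofxKinect.h"

    class testApp : public ofBaseApp {
    public:
        void setup();
        void update();
        void draw();
        ofxKinect kinect;
    };

    // testApp.cpp (excerpt)
    void testApp::setup() {
        kinect.init();
        kinect.open();
    }

    void testApp::update() {
        kinect.update();
        if (kinect.isFrameNew()) {
            unsigned char* depthPixels = kinect.getDepthPixels();    // flavor 2: 8-bit depth image
            float* distancePixels      = kinect.getDistancePixels(); // flavor 3: distances in cm
            // hand these to ofxOpenCv -- see the next snippet
        }
    }

    void testApp::draw() {
        kinect.drawDepth(0, 0, 640, 480); // flavor 1: the grayscale depth texture
    }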

You can create an ofxCvGrayscaleImage and fill it with the unsigned char* pixels using setFromPixels(); then you can cut off the unwanted "grays" with threshold() or the inRangeS() function. If you need more precision, you can load the float* distance pixels into an ofxCvFloatImage and use the same technique to select the cm range you need.
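For example, something like the sketch below, assuming the ofxKinect accessors from the previous snippet. threshold() is ofxCvImage's wrapper around cvThreshold(); the range selection is done with the raw cvInRangeS() call, reached through getCvImage(). The helper names (allocateImages, segment) and the cutoff values (100, 50 cm, 150 cm) are just placeholders to illustrate the idea.

    #include "ofxOpenCv.h"
    #include "ofxKinect.h"

    ofxCvGrayscaleImage grayImage;  // 8-bit depth image
    ofxCvGrayscaleImage grayMask;   // result of the threshold
    ofxCvFloatImage     distImage;  // per-pixel distances in cm
    ofxCvGrayscaleImage rangeMask;  // result of cvInRangeS()

    void allocateImages() {
        // the Kinect depth stream is 640x480
        grayImage.allocate(640, 480);
        grayMask.allocate(640, 480);
        distImage.allocate(640, 480);
        rangeMask.allocate(640, 480);
    }

    void segment(ofxKinect& kinect) {
        // 8-bit route: load the depth pixels and cut off the darker (farther) grays
        grayImage.setFromPixels(kinect.getDepthPixels(), 640, 480);
        grayMask = grayImage;
        grayMask.threshold(100);   // everything darker than 100 goes to black

        // float route: load the cm distances and keep an exact range
        distImage.setFromPixels(kinect.getDistancePixels(), 640, 480);
        cvInRangeS(distImage.getCvImage(),   // 32-bit float distances in cm
                   cvScalar(50.0),           // near bound: 50 cm
                   cvScalar(150.0),          // far bound: 150 cm
                   rangeMask.getCvImage());  // 8-bit mask, 255 inside the range
        rangeMask.flagImageChanged();
    }

The 8-bit route is cheaper and fine for rough segmentation; the float route lets you pick a real-world distance band in centimeters, which is usually what you want when isolating a person from the background.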