kinect depth range

Hi, I’m using a Kinect in a project where I’d like to capture the whole body, which means the subject must be fairly far from the Kinect. At this distance I don’t get many levels of depth information, and the resulting models end up with very noticeable “stairs”, or discrete levels of depth. Is there any way to tell the Kinect that I don’t care about anything above or below certain depth values, so that it can give me the full range of values over a specific slice of physical space? For example, if the default sensor range is something like 50–400cm, can I change that to 250–400cm and have all 255 levels of depth within that range? Is this possible?

I am using Ubuntu 11.10 and theo’s ofxKinect; I haven’t tried any other Kinect addons.

setDepthClipping (clipping is specified in millimeters) should do what you want:

kinect.setDepthClipping(nearClipMillimeters, farClipMillimeters);

Works for me; I get a maximum distance of almost 8 meters, even though the quality of the image deteriorates.
In that case it helps to push the near clipping plane as far away as possible.
I was looking into keeping the depth map as floats, but using the ofxKinect class methods and the depth map in chars is enough for me.
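To make the clipping behavior concrete, here is a rough, self-contained sketch of the kind of lookup table ofxKinect builds internally when you set the clipping planes (the names `makeDepthLookup`, `nearMM`, and `farMM` are my own; this is an illustration of the idea, not the addon’s actual code). Distances inside [near, far] are spread linearly over the full 8-bit range (near = bright, far = dark), and everything outside is clipped to 0:

```cpp
#include <array>
#include <cstdint>

// Hypothetical sketch of a depth-clipping lookup: index = distance in mm,
// value = 8-bit grayscale. Inside [nearMM, farMM] the full 0-255 range is
// used (nearer is brighter); outside the clip range everything maps to 0.
std::array<uint8_t, 10001> makeDepthLookup(float nearMM, float farMM) {
    std::array<uint8_t, 10001> lut{};
    for (int mm = 0; mm <= 10000; ++mm) {
        if (mm < nearMM || mm > farMM) {
            lut[mm] = 0;  // clipped out entirely
        } else {
            // 0 at the near plane, 1 at the far plane
            float t = (mm - nearMM) / (farMM - nearMM);
            lut[mm] = static_cast<uint8_t>(255.0f * (1.0f - t) + 0.5f);
        }
    }
    return lut;
}
```

So narrowing the clip range really does dedicate all 256 gray levels to that physical slice — but only in the 8-bit image, which matters later in this thread.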



Thanks guys, this seems to be doing exactly what I needed. One more question, does the function getWorldCoordinateFor(int x, int y) return the position in meters from the kinect, then?

Upon closer inspection it is not what I need. For example I can set the near distance to 325 and far to 400 but then I only end up getting maybe 20 ‘slices’ of information at discrete depth intervals.

I added this on line 487 of ofxKinect.cpp:

printf("%d %d\n", i, depthLookupTable[i]);  

It shows you the actual table that is used to convert floats to chars in your chosen depth range.
I don’t know what your problem is; I get 256 steps inside my chosen range. Are you certain your ranges are correct?

The function getWorldCoordinateFor(int x, int y) works and gives me the depth in mm at the image position indicated, but since I’m trying to extract contour information, it often fails: if the contour point is not a raw pixel inside or on the edge of the blob, but has been smoothed out in any way, there is no valid depth there.
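One workaround for the smoothed-contour problem is to fall back to the nearest pixel that actually carries depth data. This is an untested sketch of that idea (the helper name `nearestValidDepth` and the raw-array interface are my own, not part of ofxKinect): it searches outward in rings from the contour point until it finds a non-zero depth sample.

```cpp
#include <cstdint>
#include <cstdlib>

// Hypothetical fallback for sampling depth at a smoothed contour point:
// search a growing square ring around (x, y) for the nearest pixel with
// valid depth. `depth` is a width*height array of millimeter values,
// where 0 means "no data".
int nearestValidDepth(const uint16_t* depth, int width, int height,
                      int x, int y, int maxRadius) {
    for (int r = 0; r <= maxRadius; ++r) {
        for (int dy = -r; dy <= r; ++dy) {
            for (int dx = -r; dx <= r; ++dx) {
                // only visit the outer ring at this radius
                if (std::abs(dx) != r && std::abs(dy) != r) continue;
                int px = x + dx, py = y + dy;
                if (px < 0 || py < 0 || px >= width || py >= height) continue;
                uint16_t d = depth[py * width + px];
                if (d != 0) return d;  // first valid sample wins
            }
        }
    }
    return 0;  // nothing valid within maxRadius
}
```

You could then call getWorldCoordinateFor() on the pixel the search lands on, instead of on the smoothed contour point itself.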



After looking into this more, it seems that since I’m using getWorldCoordinateFor(), the clipping planes don’t even apply. They are only used in converting the 11-bit (0–2047) raw depth range into the 8-bit (0–255) depth image. I guess I was expecting setDepthClipping to somehow tell the Kinect to recalibrate its depth sensor to completely ignore anything outside of those ranges and map near to 0 and far to 2047, regardless of what those values are. This is not how it works, though. I’m not sure if that is even a possibility, but I’ve always assumed it was for some reason.

So my problem comes from the fact that the Kinect’s depth measurement as a function of distance is not linear;
it appears to follow something like a logarithmic curve. This means that the further you get from the sensor, the less detailed depth information you get about objects.

This is evident if you print out the equivalent raw depth value of distances between 325 and 400cm: I only get 19 unique values.
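You can reproduce that count with a commonly used approximation for converting the 11-bit raw value to metric distance (the tangent-based fit attributed to Stéphane Magnenat, floating around the OpenKinect community; the helper names here are my own). The tangent makes the mapping strongly nonlinear, so raw codes are dense up close and sparse far away:

```cpp
#include <cmath>

// Widely circulated approximation for converting the Kinect's 11-bit raw
// depth value to distance in meters (Magnenat's tangent fit).
double rawToMeters(int raw) {
    return 0.1236 * std::tan(raw / 2842.5 + 1.1863);
}

// Count how many distinct raw codes fall inside a physical range (meters).
// Raw value 2047 is excluded, since it conventionally means "no reading".
int rawCodesInRange(double nearM, double farM) {
    int count = 0;
    for (int raw = 0; raw < 2047; ++raw) {
        double d = rawToMeters(raw);
        if (d >= nearM && d <= farM) ++count;
    }
    return count;
}
```

With this approximation the 3.25–4.0m band contains on the order of 20 raw codes, consistent with the ~19 unique values reported above.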

So it appears as though I am SOL.

Yes, if you were using the 8-bit depth image this would help, since setting the clipping makes the depth image use those 8 bits for only that range. But getWorldCoordinateFor() uses the raw info from the Kinect, and there the limitation comes from the hardware itself.