ofxKinect getDepthPixels vs getRawDepthPixels? And thoughts on point clouds.

I am unsure what the difference between these two functions is. When using getDepthPixels I get “clean” values, so to speak, when I draw them out, but when using getRawDepthPixels I have no idea what I am looking at. Can someone please clarify this for me?

I was also wondering about drawing a point cloud with ofxKinect: the example seems to use getRawDepthPixels. Is the information here simply more detailed than what the standard getDepthPixels gives? And can I somehow get that detailed information when using getDepthPixels? Or am I just totally lost here?

If you look at the types returned by the two functions, you will see that one is a pointer to an array of unsigned char, while the other is a pointer to an array of unsigned short:

unsigned char 	* getDepthPixels();		// grey scale values  
unsigned short	* getRawDepthPixels();	// raw 11 bit values  

The second one is indeed more precise (a short is two bytes of data, versus one for a char).
The raw data is what is output by the Kinect sensor, which is roughly the distance in mm from the sensor.
The depth pixels value gives you a scaled-down version of that so it “fits” within a char (if I remember correctly, they threshold the raw data to remove the points closer than the near clipping plane and the points farther away than the far clipping plane, and then map the remaining values to a 0-255 range).

Thanks for clarifying that! Helps a lot.