Passing raw Kinect depth data to a shader

I am using ofxKinectForWindows2 to interface with a Kinect v2 device. It has a function that returns an ofPixels object containing depth data for the 512x424 depth image, with raw values in mm, i.e. ranging from 0 to about 4500.

My question is how to pass this array to a shader without losing or truncating any of the information. There is a function that returns a depth texture from the Kinect device, but it compresses the data to a 0-255 range (and doesn't seem to take full advantage of even that range).

Further context:
I am trying to do blob detection. I could perform the operations on the array directly in the app, but generating the entire value-corrected image by looping over the depth pixels and calling ofImage::setColor(x, y, value) is slow.

This is old code of mine and I don't know if it still works, but you can do something like this in a fragment shader:

vec3 orgPos;
// center the pixel coordinates on the 512x424 depth image
orgPos.xy = gl_TexCoord[0].st - vec2(512.0 / 2.0, 424.0 / 2.0);
orgPos.z  = texture2DRect(kinectDepthStream, gl_TexCoord[0].st).r;
orgPos.y *= -1.0; // flip y so +y points up

// the sampler returns a normalized value; the texture originally
// stored an unsigned short, so scale back up to millimetres
orgPos.z *= 65535.0;
// pinhole camera model, Kinect v2 depth focal length ~370 px
orgPos.xy *= orgPos.z / 370.0;

If you are trying to do blob detection, try using OpenCV. It should be faster than doing the processing on the GPU and then having to download the image back from the GPU to do the blob detection later.

But in any case, you can just do:

ofTexture tex;
tex.allocate(kinect.getRawDepthPixels());
tex.loadData(kinect.getRawDepthPixels());

I've made up the getRawDepthPixels name; I'm not sure what the real method is called.
