ofxKinect depth and some other problems...

Hi all!
I’m trying to use the Kinect for the first time, and… I’ve run into some problems.
I’m not sure I picked the right title for this post, but I didn’t know what else to write.
So… these are my problems:

  • When I try the example project and compare the original RGB image with the depth image, I see something strange: in the depth image I see two fingers… but I’m only moving one finger. :open_mouth:

  • Another “problem”: when I use the Kinect, libusb prints a lot of info to the output every frame. Is that normal?
  • I searched around, maybe not thoroughly, for what kind of data I receive from getDepthPixels(). OK, I see I receive a pointer to unsigned char, but I don’t understand the minimum and maximum values of the grayscale. I think they are 0 and 1, but I’m not sure. And… what about pixels where the distance is too far?
    I tried to set a red pixel where the depth value is >= .1, but… if you look at my result, it doesn’t seem right, and I have some problem with my fingers again. :slight_smile:
    Here are the picture and the code:

void testApp::setup(){  
	image.allocate(myKinect.width, myKinect.height, GL_RGB);  
}  

void testApp::update(){  
	unsigned char *grayPixels = myKinect.getDepthPixels();  
	unsigned char *rgbPixels = myKinect.getPixels();  
	unsigned char newPixel[myKinect.width * myKinect.height * 3];  
	for (int i = 0; i < (myKinect.width * myKinect.height); i += 1){  
		if (grayPixels[i] < .1){  
			newPixel[3 * i]       = rgbPixels[3 * i];  
			newPixel[(3 * i) + 1] = rgbPixels[(3 * i) + 1];  
			newPixel[(3 * i) + 2] = rgbPixels[(3 * i) + 2];  
		} else {  
			newPixel[3 * i]       = 255;  
			newPixel[(3 * i) + 1] = 0;  
			newPixel[(3 * i) + 2] = 0;  
		}  
	}  
	image.loadData(newPixel, myKinect.width, myKinect.height, GL_RGB);  
}  

void testApp::draw(){  
	ofBackground(0, 0, 0);  
	myKinect.drawDepth(0, 0);  
}  

Any suggestions?

Hi Mauro,

I believe the distance from your hand to your Kinect is less than the minimum required; this is the reason for the completely black silhouettes, and maybe it also causes the getDepth problem…

As for the double image, don’t worry: the Kinect is a stereo camera (color camera + IR camera)… now think of your eyes… when something is too close you see it weirdly, and doubled in some cases. Same with the Kinect.

Try with a little more distance from the Kinect.

cheers :wink:

Hi Monk!
I have tried with more distance, but I have the same problem.
I’ll show you:

Here the distance from my hand is about 40 cm.
And here, with more distance:

If you look inside the rectangle (added in Photoshop, not OpenCV :)), in the right picture you can see the shape of my finger, but… not all of it. Where my finger is, there is a “hole”; at the bottom you can see a piece of my hand (the pink area) and at the top there is my plush toy (the green area).
Maybe I wrote the code wrong?
(It’s good to write here, I can learn oF and improve my English, wow! :-))


I think the black spots are normal. They’re just the shadow of your hand on your body: the Kinect emits infrared light and your hand blocks it. This video is pretty good at showing this with an “IR light camera”: http://blogs.howstuffworks.com/2010/11/05/how-microsoft-kinect-works-an-amazing-use-of-infrared-light/

But then your hand should not be visible under this black spot. Are you sure you are aligning and scaling the overlay and the video the same way? Are those frames from the same camera update? In the first image it looks like your hand in the video might have a slightly different angle and position than the one in the depth map. But I’m not sure about that one…

Hi underdoeg,
yes, I think my hand casts a shadow, but I don’t know why I see a black area and not gray… OK, there is a shadow, but the distance should be the same as the rest of the area… the same as my body, shouldn’t it?
About the overlay, I think you mean the red area: I set to red only the pixels where the depth is greater than or equal to .1. If I don’t set the red pixels, I see the right image, the same image I would see with “myKinect.draw(0, 0)”. So… I set the red pixels in the same frame; I think it’s impossible for them not to be aligned… but maybe I wrote the wrong code.

Here I am again. I’m trying to build multitouch using the Kinect… but… I see the same problem. I’ll show you one other image. If this looks normal to you, I’ll trust what you say… but I don’t know… it’s strange that I don’t see the right shape.

I use the basic ofxKinect example and I add the last image. I create my image starting from “getRawDepthPixels”. If the value is under 500, I set a white pixel. The problem is the same: if you look at the first image you can see that the distance is not right. Any solution? I can’t find what the problem is.

I use plexiglass 5 mm thick… maybe my problem could be the light, or something about reflection?

And without the plexiglass, if I try to also draw the area that is more than 500… I get what you see in the last image:

If someone wants to check, here is the code: