ofxKinectV2-Osc Real World Coordinates


Hey all,
I am using ofxKinectV2-osc to get KinectV2 data on my Mac.
I am a bit confused by the coordinates I receive. How can I get the real-world coordinates of the skeletal points?

      ofVec2f userHandLeftPoint = skeletons->at(i).getHandLeft().getPoint();

seems to give me coordinates between -1 and 1. How do I convert them to screen coordinates or, better, real-world coordinates?
Any hints on documentation about this?


OK, that was sort of a stupid question. I think the coordinates I get are the real-world coordinates in meters, seen from the Kinect's point of view, right?
So how do I map them properly to the screen in 2D? Any hints?
As far as I can see, the example just interprets the coordinates as pixels, so everything ends up in the top-left corner, just a few pixels away from the origin…


What do you mean exactly? If you are using the colour image of the Kinect and want to draw something over the left-hand point, there are coordinate mapper functions inside the Kinect SDK. I am guessing that is what you mean by mapping them properly on the screen. But since you don't have the Kinect camera image in your project, just joint coordinates, that much accuracy is probably not important, right? If it is, you could find a Windows machine and recompile the ofxKinectV2-osc application with a coordinate mapper function, but the result would only be correct relative to the Kinect image that you don't have.

You could also look at the libfreenect-based addons; they have also implemented the coordinate mapper functions (one such addon is https://github.com/ofTheo/ofxKinectV2). You could try to get that function working with the data you have.

Or, if accuracy and registration with the Kinect image are not important, you can just experiment with mapping the data to pixel values using ofMap, or generate a homography matrix using OpenCV for a more complex relationship between the coordinate systems.


Hey fresla, thanks for your quick answer.
I realize now that I was thinking very imprecisely about the problem.
What I have is a theatre stage, a projector and a Kinect. The projector is far enough away to light up the whole stage. The Kinect, though, can't be that far away, so it only sees part of the stage.
What I want to do is attract objects to, for example, a hand of the performer.
I think I have to map the Kinect coordinates somehow to the stage coordinates, and this system should in turn be projected somehow onto the projection screen.
The plan is: set up a sort of 3D scene containing the stage, the Kinect, the projection surface and the projector, then send a ray from the projector through the left-hand point and take the intersection of this ray with the projection surface. That gives me the screen coordinate for the left hand.
I think it should be possible like that, but maybe I am inventing a problem here and there is a simpler solution?


There are some addons for Kinect-projector calibration; it is not such an easy task.

These are both pretty old, I tried them some time ago but did not have time to make them work.


Thanks a lot for your effort!
The main problem I have is that I need skeleton tracking with the v2 and can't get it to work on OS X. That is why I use ofxKinectV2-OSC to get skeleton data to my OS X machine. But as far as I can see, with that setup I lose the ability to do this sort of calibration. I don't need masking and such, just the skeleton points in screen space. So I either switch to Windows completely or do some crazy transformations as described above.

Now for something completely different:
Maybe I am missing something here, but the example included in ofxKinectV2-OSC gives back real-world coordinates, so I can't render the skeleton the way it is rendered on the Windows machine. The example just draws everything 1–3 pixels away from 0/0, but in the readme the screenshot looks like the skeleton on the Windows machine; it is somehow related/translated to window space. I know this is not what I need right now, but it could still be helpful for debugging. How do I get the same coordinates as the skeleton drawn by the Windows app? Thanks a lot for your help!


You can get something like those coordinates by scaling them with the ofMap function, or even by just multiplying the incoming coordinates. I am guessing the Kinect OSC software gives z only as positive numbers (meters from the camera); y will also be positive (meters from the ground), and x will be both negative and positive (meters from the center).

Even in a basic way, if you multiply the coordinates by 500 they will start to fill your screen; you will just have to translate along the x axis to take care of the negative values.


Hi michif,

about your post above (“i think, it should be possible like that but maybe i am inventing a problem here and there is a simpler solution for that?”):

I indeed think there is a much simpler solution. In settings like yours (as far as I understand it) I personally omit modelling the real 3D situation (the stage), as it quickly gets complicated and can even be imprecise (especially when the projector placement and optics(!) change). It is also unnecessary (again: as far as I understand…):

I have the 3D coordinates from the Kinect. I then draw the object in an abstract 3D space (which means virtually no calculations are needed, only scaling to fit the size) into an FBO (which effectively makes it 2D). The scaling of z might differ from that of x and y to get the right depth feeling, but this is easily done. The FBO is what I project.

To get it mapped onto the stage I use the brilliant ofxBezierWarp by Matt Gingold (or any other mapping tool, even an external one, if you use Syphon/Spout).

So I do not have to do heavy calculations; I just do the mapping by sight.
The connections from Kinect 3D space to stage 3D space are made implicitly and intuitively this way. Also, any kind of distortion in the projector optics can be corrected easily.

Hope this helps.
Please just ask, in case you want to go this way and have questions.

Have a good day!