Projector-kinect calibration

In the mcbth video, at 1:40, the flames are generated using ofxFX (I believe). They start at calibrated positions, but the flames do not exactly follow the silhouette, as they weren't meant to stay only inside the contour. So, in short, the fit is OK and should generally look like the calibration video.

Sorry, as of yet there is no intention to switch to 008 or QC. But check out ofxReprojection; it is better maintained than mcbth and easier to compile than ofxCamaraLucida. It's a bit of a hassle to set up (no matter what system), but take your time with it; understanding how the calibration works really helps (check the first link in the top post).

Hi Kj1 and others,

First, thank you for your nice work; this was exactly what I was looking for.

I started a project with oF 0.8.4 using ofxKinect as the Kinect backend, but wanted to try out your calibration stuff, so I've made some changes to support this set-up (ofxSecondWindow use, an ofxKinect wrapper with an example project, plus some minor changes).

Just created a pull request on the original repo ( ); I hope everything is fine (it works great for me).

Next step is Kinect 2, right? :smile:


Not sure how late I am, but the Kinect2 repo can be found here: .
I still don't have a projector lying around to test the calibration, but as far as reading the Kinect goes, everything seems to work as expected.

How hard would it be to support both ofxKinect (with the libfreenect backend) for Kinect 1 and ofxKinectV2 (with the MS SDK for the Kinect V2) in a single project? I ask because ofxKinectCommonBridge and RGBDCamCalibWrapper are too different for a simple merge.

Btw, @miu, I'm using openFrameworks 0.8.x with VS2013 on both projects. Got it working from here: OpenFrameworks And VS2013?

Last question, @KJ1: are you still maintaining this project?

Thank you for your amazing contribution!

The V2 branch is very unstable and only works with my fork of ofxKinectV2 on Windows - I had lots of issues getting the Kinect V2 to run… Basically it's there as an example, and you should avoid using it for serious projects : ) But when you get it running, it works very well, and the lower delay of the Kinect V2 is noticeable.

Anyway, I'm not maintaining the projects, but when oF 0.9 gets released (hopefully with a platform-independent ofxKinectV2?), I might take another shot at this.

FYI, RoomAliveToolkit does Kinect v2 calibration and outputs matrices in an XML file:

I'm doing Kinect v2 projection mapping, but you have to apply both the depthToColor matrix and the projector extrinsic matrix, and there is some weird scaling going on:


@micuat that's awesome! I was going to scratch my head all day over how to use the matrices the RoomAlive Toolkit produces, but your app is much easier to "digest" than the sample that comes with the library. Many thanks!

Now that I have a projector, the chessboard drawing just isn't appearing - not in the SecondWindow or the DebugWindow. Has this happened to any of you before? How did you fix it? I've put some printfs through the code and everything seems to work as expected, but not a single chessboard is being drawn.

Thanks for the example! @micuat @silverbahamut I am having trouble understanding a few parts of the code and calibration process. I am trying to translate a point in depth coordinate space into projector coordinates, using the calibration from the RoomAlive Toolkit.

What I (think I) understand from your code is:

  1. Load camera intrinsics from the XML file into the openCV calibration:

proCalibration.setup(proIntrinsics, proSize);

  2. Create the camera/projector transformation:

proExtrinsics = cameraToDepth.t() * depthToColor.t() * proExtrinsics.t();

Then use the transformation to map depth to projector:

  3. Map a depth point to camera/world space:

kinect.getDepthSource()->coordinateMapper->MapDepthPointToCameraSpace(depthPoint, depth, &cameraPoint);
trackedTips.push_back(ofVec3f(-cameraPoint.X, -cameraPoint.Y, cameraPoint.Z));

  4. Apply the transformation matrix to the GL context:

glMultMatrixd((GLdouble*)proExtrinsics.ptr(0, 0));

  • Where/how should the camera-space point be drawn in order for it to work with the transformation?
  • Is there a way to use the matrices to directly get the coordinates in projector space for a given camera-space point?


Hi Olivia,

I confess that I actually still don't know how to use the calibration data from RoomAlive. I tweaked the camera matrix a lot but never got a perfect result.

I remember @genekogan is doing some Kinect V2 projector calibration stuff. Probably this repository:

so you might want to take a look.

Once you set the transformation matrix with

glMultMatrixd((GLdouble*)proExtrinsics.ptr(0, 0));

any drawing should be mapped to the projector space.

glMultMatrixd will magically warp camera-space points to projector space. But if you want to do it manually, something like

Mat projectorSpacePoint = proExtrinsics * cameraSpacePoint;
projectorSpacePoint /= projectorSpacePoint.at<double>(2, 0);
ofLogVerbose() << "x: " << projectorSpacePoint.at<double>(0, 0)
               << " y: " << projectorSpacePoint.at<double>(1, 0)
               << " z: " << projectorSpacePoint.at<double>(2, 0);

should work (not tested).

Thanks! That helps a lot. Will try out the matrix multiplication.

@genekogan I tried adapting ofxKinectProjectorToolkitV2 (I think it only works on Mac right now) to use ofxKinectForWindows2 by @elliotwoods, but am getting strange results. My understanding is that the calibrate() function works independently of the Kinect library used, accepting a series of point pairs, where each projector point corresponds to a world point from the perspective of the Kinect. I adapted the getWorldCoordinateAt function to use Kinect V2, but the calibration results are really sporadic (i.e. sometimes the point wavers a huge amount or ends up on the opposite side of the projector output, whereas other times it seems close).

This is the code I am using for getWorldCoordinateAt(), in case someone sees an error or there is a different way of doing this:

void ofApp::addPointPair() {
    ofFloatPixels colorPoints; // filled elsewhere with the color-to-camera-space map
    for (int i = 0; i < cvPoints.size(); i++) {
        int index = ((int)cvPoints[i].y * 1920 + (int)cvPoints[i].x) * 3;
        float depthPoint = colorPoints[index + 2];
        float xCoord = colorPoints[index];
        float yCoord = colorPoints[index + 1];
        // kinect images are automatically mirrored, so the real point should be -x
        ofVec3f worldPoint(-xCoord, yCoord, depthPoint);
        ofLog() << worldPoint;
        if (depthPoint > 0) {
            // ... (rest omitted in the original post)
        }
    }
}

Actually, I found this was happening for me too and I couldn't figure out why - so I think you are probably using it right. I think it has to do with the depth image and RGB image not being the same size on the Kinect V2, and it sporadically querying points one pixel off from where it should be. I'm not sure I'm right about that, though. Unfortunately I won't have much time to look at it in the next month :frowning: I'm hoping that when we get Orbbec support, this solves all our problems.

I'm new here, so my question may be a little noobish - well, everything must begin somewhere.
Here is my question :slightly_smiling:

  1. I'm trying to use your code in Visual Studio 2015 but cannot; if you could explain a bit further, that would be great.
  2. Your propositions and code are spread all over the place (the discussion), so as a new member I'm having a hard time organizing my thoughts.
  3. Is there a possibility for the code to work with Kinect v1 (Xbox 360)?

Thanks, and great work by the way :smile:

@genekogan @rhizomaticode

I adapted genekogan's ofxKinectProjectorToolkit to use the MS Kinect SDK, but have the same issues: the results are sporadic and the test point moves all over the place. Did you have any success resolving the issue?

@herzig did you adapt the original ofxKinectProjectorToolkit or ofxKinectProjectorToolkitV2? If it's the former, that helps pin the problem on the SDK (I was wondering if I had screwed something up in ofxKinectProjectorToolkitV2). I think the problem is that it sporadically missamples the depth when the contour pixel doesn't actually lie inside the depth image, and it just samples something outside it.

I adapted ofxKinectProjectorToolkit; I wasn't even aware that there is a V2 version :wink: I also suspect the problem is something in the color-to-world mapping. If I draw the color-to-world map returned by the SDK method ICoordinateMapper::MapColorFrameToCameraSpace (world coordinates = camera-space coordinates in SDK lingo), the mapping looks pretty nice and smooth in the target area. Here's a picture:

Absolute Camera (world) coordinates X,Y,Z are mapped to R,G,B values.

I know the projector setup is not nice but the problem is the same with a proper setup, and still the mapping seems to be smooth.

Here are some values that I'm getting (color/world/projected coordinates);
the world coordinates (meters) don't have that much noise.

col: 511, 151 world: -0.353946, 0.713826, 2.851 proj: 0.435593, 0.53031
col: 511, 151 world: -0.353946, 0.713826, 2.851 proj: 0.435593, 0.53031
col: 511, 151 world: -0.353946, 0.713826, 2.851 proj: 0.435593, 0.53031
col: 511, 151 world: -0.354095, 0.706359, 2.852 proj: 0.393372, 0.679994
col: 511, 151 world: -0.354095, 0.706359, 2.852 proj: 0.393372, 0.679994
col: 511, 151 world: -0.354219, 0.706606, 2.853 proj: 0.275243, 1.11467
col: 511, 151 world: -0.354219, 0.706606, 2.853 proj: 0.275243, 1.11467
col: 511, 151 world: -0.35397, 0.706111, 2.851 proj: 0.421681, 0.575829
col: 511, 151 world: -0.354319, 0.714577, 2.854 proj: 0.297167, 1.06408
col: 511, 151 world: -0.354319, 0.714577, 2.854 proj: 0.297167, 1.06408
col: 511, 151 world: -0.354591, 0.707349, 2.856 proj: 0.519023, 0.217637
col: 511, 151 world: -0.354591, 0.707349, 2.856 proj: 0.519023, 0.217637
col: 511, 151 world: -0.354591, 0.707349, 2.856 proj: 0.519023, 0.217637

This in particular seems pretty weird to me:
col:511, 151 world: -0.354095, 0.706359, 2.852 proj: 0.393372, 0.679994
col:511, 151 world: -0.354219, 0.706606, 2.853 proj: 0.275243, 1.11467

the world point moves by less than a millimeter, but the projection seems to explode.

Anyway, I'll dive into it some more; any hints or ideas are appreciated. Could it really just be that my calibration is that bad?