Trying to project over a moving object with a Kinect

Hello everybody,

I’m trying to calibrate my projector together with my Kinect in order to project an image over a moving object (which is actually a sphere).

I’m able to calibrate the Kinect against the projected image on the wall with cvGetPerspectiveTransform, without using the depth. But of course this only works if I stay close to the wall I’m projecting onto.

As I move away from the wall and towards the Kinect, all the calibration is lost, because depth is not part of the transform’s parameters.

Please, in this video you can see what I mean (of course, that’s awesome, but I guess the maths are the same):

I guess I should do the same thing as with cvGetPerspectiveTransform, but also taking the depth into account.

So, I collected several points in the world captured by the Kinect, let’s say (x1, y1, z1), (x2, y2, z2), …, and found the points in the projection that correspond to them:

(x1, y1, z1) -> (Px1, Py1)
(x2, y2, z2) -> (Px2, Py2)

So, if I put my sphere in front of the Kinect at point (x1, y1, z1) and I put a black image on the projector with only a white pixel at (Px1, Py1), it goes straight to the center of the sphere.

And… I’m stuck here :frowning:

I’m lacking some math knowledge… I guess there is some way to find a matrix that, given a point in the Kinect’s 3D world, can convert it to the projector’s 2D world…
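For what it’s worth, the matrix being described here is a 3x4 projection matrix, and with six or more (x, y, z) -> (Px, Py) correspondences it can be estimated with the Direct Linear Transform. A minimal numpy sketch; the matrix and correspondences below are synthetic, invented only to check that the recovery works:

```python
import numpy as np

def solve_projection(world_pts, proj_pts):
    """Direct Linear Transform: find the 3x4 matrix P with
    (Px, Py) ~ P @ [x, y, z, 1] (up to scale)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, proj_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The flattened P is the null-space direction of A
    # (last right-singular vector of the SVD)
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

def project(P, pt3d):
    """Apply P to a 3D point and dehomogenise to 2D projector pixels."""
    u, v, w = P @ np.append(pt3d, 1.0)
    return np.array([u / w, v / w])

# Synthetic check: invent a projector matrix (arbitrary values), generate
# six non-coplanar correspondences, then recover the matrix from them alone.
P_true = np.array([[800.0, 0.0, 512.0, 10.0],
                   [0.0, 800.0, 384.0, 20.0],
                   [0.0, 0.0, 1.0, 2.0]])
world = np.array([[0.1, 0.2, 1.5], [0.4, -0.1, 2.0], [-0.3, 0.3, 1.2],
                  [0.2, 0.5, 2.5], [-0.2, -0.4, 1.8], [0.0, 0.1, 3.0]])
pix = np.array([project(P_true, p) for p in world])

P = solve_projection(world, pix)
```

The catch in practice is getting accurate, well-spread (non-coplanar) 3D points; with noisy Kinect depth you would want many more than six correspondences and a least-squares fit, which is what the calibration tools discussed below effectively do.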

Any help would be really appreciated… I think this would be a very practical piece of code, so that a Kinect and a projector can be calibrated together in 3D.


In trying to do something similar I ran across this. You will have to get RGBDemo going as well in order to do the calibration.

The first step is calibrating your projector to the Kinect. Once you have that, it’s pretty easy to project onto a “3D point”. Elliot and James made some tools here as part of an Art&&Code workshop I ran with Elliot.

Thanks for the replies!

I see that both projects rely on RGBDemo for the calibration of the Kinect and the projector.

I could calibrate my kinect with the chessboard, but I’m lost when trying to calibrate the projector together with the kinect.

I guess you should beam the “projector_chessboard.pdf” onto a white surface and make the distances between the corners match the distances in the “projector_chessboard_help.jpg”.

I’m not really sure about what to project, and how, in order to get the calibration. Sorry if this is too simple; I just can’t get it.

Could please someone who did this before give me any tip? I would really appreciate it!


There are some videos on Elliot’s site

Here are the notes from the workshop - you will have to dig through the wiki history as it has been attacked by spammers :frowning:

When I was exploring Elliot’s project, a lot of the addons were in development and won’t be compatible with the current ones you find on GitHub. Also, some of the application of extrinsics that Elliot demos in VVVV hasn’t been implemented in the OF version yet.

If you are on a Mac you can check out my fork as a starting point. I really butchered a lot of the addons in order to get it to compile, but it may be a useful base to update from.

Oh wow, just emailed Golan about the wiki spam. Thanks for the heads up.


Thanks for your help… I’m now dealing with the CalibrateProjector.v4p.

I can’t manage to load the patches that rely on vvvv.nodes.emguCv.dll or vvvv.nodes.openNI.dll.

I’ve tried dragging & dropping the .dll onto the v4p and also adding it in the root.v4p, without success.

I guess there is something big I’m missing here… I’m new to vvvv :(.


@gizmow I am trying to do the same thing as you and have been looking at the same resources. Recently I’ve been trying to get RGBDemo and Camara-Lucida working together, but have had no success with RGBDemo. This is how I have been trying to calibrate my projector to the Kinect with RGBDemo. Maybe I am missing a step:

  • open rgbd-viewer

  • grab images of chessboard for projector, projected on the wall behind me

  • once I grab those images I run calibrate_projector

  • It processes the RGB images, but then it moves on to processing IR images and there are none, because it’s a projector; the IR camera can’t see its light.

Crash with ‘calibration.cpp:3401: error: (-215) nimages > 0 in function’.

Besides Elliot Woods’ videos, I have still not come across a good procedure for calibrating a projector to a Kinect.

Please keep me updated on any success on your part and I will too! Thanks!

Sure Heartstring,

I’m not sure about calibrate_projector; it crashes every time I try to use it. I compiled RGBDemo and I’m looking into it now.

In order to calibrate the projector, it seems that you have to beam the chessboard onto a white surface which you can move / rotate, so you can map different depth coordinates to the same chessboard corners. That way you get a lot of (x,y,z) points from the Kinect that map to the same (Px,Py) in the projector. Then you feed that information to the calibrateCamera function and it calculates the reprojection matrix, which you can use to map any point from the 3D world to the projector’s xy coordinates.

It looks pretty nice in the Kimchi and Chips demo, but I cannot manage to run the VVVV patch, as I’m really new to that application.

I’ll keep posting my updates on this!

Still working on this.

I’m still trying to make calibrate_projector work, with no success. I’m not sure which pattern to use; none of the ones that come in the /data folder works (calibrate_projector crashes).

I’m also exploring the Art&&Code approach, but I’m not really sure about the procedure either…


Same here. I have run into the same thing with calibrate_projector. Tried on 3 computers with OS X 10.6.8, 10.7.2, and Windows 7. I have also started looking at the Art&&Code examples and trying to get them to compile, with limited success. I am now just going to try to write my own code from scratch, learning from the code that is out there; I’ll probably learn more this way. I’ll try to write up the steps and procedure as I learn them and put them here. Keep trying!!!

Check this link out:
This helped me to get RGBDemo working. I got it working on my OS X 10.6.8 machine. I don’t think RGBDemo will work with Lion though, because its libpng library is out of date: RGBDemo uses 1.4.8 and Lion uses 1.5.4. That causes the rgbd-viewer to crash on screen grab. Hope this helps!

Hello heartstring.

I am trying to calibrate my Kinect and a projector, but I am confused by the tutorials/calibration, mainly because I have no clue what to do with the RGBDemo files downloaded here:
I guess I have to compile them somehow.

Getting ofxCamaraLucida to compile with OF007 was a breeze.
I duplicated the opencvexample folder, added the right addons, and had to rename #include <libusb-1.0/libusb.h> to #include “libusb.h” in a few places.

I found the compiled OS X binary here:

Now, after running rgbd-viewer and grabbing a few chessboard images, I tried running calibrate_kinect_ir. It processes the images and finds the corners, but it crashes before generating the .yaml file.


The easiest way to use RGBDemo is to go to Nicolas’ page: he has binaries of RGBDemo. Download the one for your system and run the binary. Once you have it up and running, you need to take screen captures of the chessboards as shown in Christian’s tutorials. More detail about the screen grab: once you have rgbd-viewer up and running, you grab one frame of the video from the Kinect by (on my Mac) hitting Command-G, or under File, Grab one frame. It saves a screen capture in the grab1 directory, wherever you have rgbd-viewer. Save about 20–30 screen captures, moving the chessboard around at different angles. From there, follow Christian’s tutorial.

Let me know if that works! If you want to compile the rgb demo from source I can help too! Good Luck!
By the way, my last name is Schulz too.


Hi heartstring,

I guess I am pretty new to this.
When you say “run the binary”, do you mean the download that includes the source code? Because, as posted before, I did find a compiled version here:

I have no clue how to get the source code version to run via the terminal.
When using the compiled version I am able to take screenshots with the just fine (of the large chessboard + projected chessboard),
but when I open the it just crashes.

And the says this:
[0x0-0x1b91b9].calibrate_projector[10319] ASSERT failure in int main(int, char**): /-psn_0_1806777 is not a directory. [context.images_dir.exists()]

I also tried with screenshots of just the 11x8 chessboard, in order to calibrate the Kinect. That app opens up fine and processes the corners, but crashes at the end without generating any yml files.

Any ideas?


I found that the instructions here ( needed to be modified slightly on a Mac. To run the binary, you need to:

  • open Terminal

  • move to the RGBDemo directory (i.e. type “cd /Applications/RGBDemo”), assuming the files were put into the Applications folder

  • run it by typing: “” or use “ --freenect” to use the libfreenect backend

  • you can then run the calibration app “ --pattern-size 0.025 grab1”

  • then run the viewer calibrated “ --freenect --calibration kinect_calibration.yml”

hope this helps,


Thanks so much.

This was very helpful.

For the record:

It first did not work.
Then I ran the viewer with this command: --freenect
In the app I selected dual RGB/IR mode under the Capture menu,

and made sure that the chessboard was always visible in the IR image and that the board was not too close to the camera (i.e. so close that the depth camera couldn’t see the board).

When running the

it first shows the images that I took with the viewer, plus circles and connecting lines;
then it shows the image with the transformation applied and circles around the corners;
then it shows the IR camera image with circles and lines;
then I think it also showed the transformed image.
After that it generated calibration_rgb.yaml, calibration_depth.yaml and kinect_calibration.yml.

Now the challenge is to get the to produce an average pixel reprojection error lower than 1.

As mentioned on the Camara-Lucida site:
“‘Average pixel reprojection error’: this number should be less than 1. If it’s greater than 1, it means the calibration failed for some reason.”

I am running the by calling: grab1 --kinect kinect_calibration.yml --output projector_calibration.yml --projector-width 1024 --projector-height 768

Terminal output:

init done
[7136.986347573901, 0, 534.681071936976;
0, 6073.042694323907, 476.8111164231698;
0, 0, 1]
[0, 0, 0, 0, 0]
Average pixel reprojection error: 78.4544

Here are a few of the images that I am feeding it, and the calibration results:

I am noticing that some of the chessboard corners were not recognized,

so I will try to take new images, tilting/angling the board less.

I used the following images:

And got a pretty good error:

init done
[1745.878463506011, 0, 721.8877736828866;
0, 1753.43362195889, 669.8555282126089;
0, 0, 1]
[0, 0, 0, 0, 0]
Average pixel reprojection error: 0.636539

I was surprised, as my physical image is not very flat and is just 4 sheets taped together.

I finally got some good numbers:

[1328.955102586346, 0, 455.4493471307599;
0, 1351.618959340305, 1233.348860446746;
0, 0, 1]
[0, 0, 0, 0, 0]
Average pixel reprojection error: 0.427394

Here are the 10 poses that did it for me:

I took 20 images, tried the, got an average that was too high, deleted some of the 20 images, and repeated that until I got a good average.

I am not sure what factors cause the average to change.
(It would be nice to make the calibration app do a calibration every time you take a new image; this way one could check the average each time. Maybe I will take that on once I get the rest to work.)

Now that I’ve got good calibration files, I plugged them into the cameraLucida_ofxKinect example and got pretty good results.
It projects the coloured depth image onto objects, but with an offset that seems to change depending on where the object is located. Also, if I go further than about 1.8 metres away, the depth image disappears.
From former experience with ofxKinect I know that it should see up to 4 metres. (I guess I’ll leave that for tomorrow.)