Projector-Kinect calibration

Hey all,

I’m trying to calibrate a Kinect-projector pair. I’ve tried tools such as RGBDemo, Elliot’s VVVV patch, and ofxCamaraLucida.

However, I want a pure-OF implementation of the method shown here:

I would use Kyle’s ofxCv addon to call the OpenCV methods.

This is the way I’d implement it. Now I ask: is this the right way, or am I forgetting something?
Could this work? Does anyone want to contribute? I might need some help, especially with the last steps 10-12 :smiley:

Are there any “OpenCV noob friendly” examples to be found? Or some clear documentation?

//Kinect setup

1) Get calibrated RGB image & depth image
2) Generate world image (RGB + depth)

//Projector output
3) Find projector resolution
4) Generate fullscreen second window
5) Draw 8x6 chessboard pattern with a given spacing (OpenCV’s drawChessboardCorners)
6) Output the projector coordinates of the inner corners for later use

//Grab 10-30 images while projecting the chessboard on a flat surface
7) Use OpenCV’s findChessboardCorners routine to get the location of the inner corners in the world image (x, y and thus also z)
8) When found and stable for 2 seconds, grab the coordinates in both projector & Kinect space and add both sets to a queue
9) When the queues contain enough data points, continue

//Calibrate the projector
10) Using each pair of coordinate sets and the projector resolution, call OpenCV’s calibrateCamera method
11) Obtain the intrinsics & extrinsics
12) Calculate the view transform & projection transform
13) Check the reprojection error; if > 1, go to step 7

14) Apply this to the Kinect input (cf. ofxCamaraLucida)
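Step 8’s “stable for 2 seconds” test boils down to a per-frame corner-displacement check; a minimal sketch of that idea (the `Pt` struct and `cornersStable` name are mine, standing in for cv::Point2f):

```cpp
#include <cmath>
#include <vector>

// Hypothetical stand-in for cv::Point2f.
struct Pt { float x, y; };

// Returns true when every corner moved less than `tol` pixels since
// the previous frame; the "stable for 2 seconds" test can then be a
// timer that resets whenever this returns false.
bool cornersStable(const std::vector<Pt>& prev,
                   const std::vector<Pt>& curr,
                   float tol) {
    if (prev.size() != curr.size() || prev.empty()) return false;
    for (size_t i = 0; i < prev.size(); ++i) {
        float dx = prev[i].x - curr[i].x;
        float dy = prev[i].y - curr[i].y;
        if (std::sqrt(dx * dx + dy * dy) >= tol) return false;
    }
    return true;
}
```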


This would be my guide for steps 10-12, but it is more complex than anticipated (isn’t it always).


I’ll keep adding info as I go :smiley:

Steps 1 -> 6: done.

The first issue is findChessboardCorners. It works, pretty well actually, but it is so slow (around 250 ms/frame at 640x480 -> 4 FPS).

I might be missing some obvious OpenCV construct or something?

Current code:

// get input img
ofxCvColorImage* color = kinect.getColorImage();

// cv code
Mat colorImage = toCv(color->getPixelsRef());
cv::Size patternSize = cv::Size(6, 4);
vector<Point2f> pointBuf;
bool found = findChessboardCorners(colorImage, patternSize, pointBuf, chessFlags);

// fine-tune; even without this code it's only 5 FPS
if(found) {
    Mat gray;
    cvtColor(colorImage, gray, CV_RGB2GRAY);
    cornerSubPix(gray, pointBuf, cv::Size(11, 11), cv::Size(-1, -1),
                 TermCriteria(CV_TERMCRIT_EPS + CV_TERMCRIT_ITER, 30, 0.1));
    drawChessboardCorners(colorImage, patternSize, Mat(pointBuf), found);
}

// draw result, doesn't influence FPS
ofPixels t;
toOf(colorImage, t);
ofImage tmp = ofImage(t);

Compiled in release mode with OpenMP and compiler/linker optimizations, on Win7 with VS2010 + OF 0.7.4.

In VVVV the method is way faster, getting 30 FPS @ 640x480 and only using about 10% CPU (not even one core at 100%), so I guess I’m really missing something?

Fixed it by using a 0.5 rescale for the initial scanning & board-stability checks; once the board is stable, it analyses the full-res frame and puts the world/image points in the queue.
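For reference, mapping corners found on the downscaled frame back to full resolution is just a divide by the scale factor; a sketch (the `Pt` struct and function name are mine, standing in for cv::Point2f):

```cpp
#include <vector>

struct Pt { float x, y; };  // stand-in for cv::Point2f

// Corners detected on a frame resized by `scale` (e.g. 0.5) are
// mapped back to full-resolution pixel coordinates, so the full-res
// pass (or cornerSubPix) can start from them.
std::vector<Pt> upscaleCorners(const std::vector<Pt>& corners, float scale) {
    std::vector<Pt> out;
    out.reserve(corners.size());
    for (const Pt& p : corners)
        out.push_back({p.x / scale, p.y / scale});
    return out;
}
```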

So steps 7 - 10 are done thanks to these awesome links:

The pixel reprojection error is still too high, but getting there…

Hey Kj1,

I tried to do this some months ago, unfortunately without success back then.
From what I remember, I was getting a correct extrinsic matrix (easy to verify by measuring the distance between the projector’s and Kinect’s lenses), but I kept getting weird, wrong center-point values in my intrinsics that blew up the result.

You’ve probably found it already, but Elliot Woods had himself started an oF implementation of his routine during an Art and Code workshop. It’s not complete, but the code is logically a good base to start from:

If it can help, I’ve pushed my attempt on github :

I would love to see this working in oF so I’ll be happy to contribute!


Hey Kikko,

Thanks for the input!

I’ve managed to get it semi-working using the MS SDK + lots of Elliot’s tutorials + lots of GitHub source including your link + ofxFenster (2nd window).
Currently, I get all the intrinsics/matrices out of the calibration routine and generate the precious YML file!
However, there are some serious issues to fix before releasing a user-friendly tool.

Now that I have the intrinsics, I am able to reproject (pixel-based only, currently), so I can calculate the screen coordinate for any world position (x, y, z). I cannot do it using shaders or the GL matrix functions (yet), but I’m getting there! I also can’t get my head around the use of ofxCamaraLucida, but it has some inspiring code :slight_smile:

I’ll take a look at your code, every bit helps :slight_smile: I’ll put mine on GitHub tonight. Are you interested in adding support for the OpenNI (ofxOpenNI) & freenect (ofxKinect) drivers? I’ve used a slightly modified version of ofxKinectNui. If all three are supported and tested, I think we could release this as an addon!

Sounds good!

Regarding the loading of intrinsic/extrinsic matrices (via YML files) into GL cameras, mapamok from Kyle McDonald provides a quite straightforward way of doing so.

Otherwise yes, I’ll be happy to help with supporting these 2 backends!

Edit: you can check out the loadCalibration() method of this commit:

The model matrix is then loaded in the draw method :

if(calibrationReady) {
    intrinsics.loadProjectionMatrix(10, 2000);
    if(getb("setupMode")) {
        imageMesh = getProjectedMesh(objectMesh);
    }
}
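For what it’s worth, building a GL projection matrix from pinhole intrinsics (which is roughly what a call like loadProjectionMatrix(near, far) has to do) follows the glFrustum construction; a sketch under that assumption (function name is mine; sign and Y-flip conventions vary between implementations):

```cpp
#include <array>
#include <cmath>

// Build an OpenGL-style column-major 4x4 projection matrix from
// pinhole intrinsics (fx, fy in pixels; principal point cx, cy),
// image size w x h, and near/far planes n, f. This mirrors the
// glFrustum construction; whether you must also flip Y depends on
// your window/texture setup.
std::array<float, 16> projectionFromIntrinsics(
        float fx, float fy, float cx, float cy,
        float w, float h, float n, float f) {
    // Frustum bounds at the near plane, derived from the intrinsics.
    float l = -cx * n / fx, r = (w - cx) * n / fx;
    float b = -(h - cy) * n / fy, t = cy * n / fy;
    std::array<float, 16> m = {};  // column-major, like OpenGL
    m[0]  = 2 * n / (r - l);
    m[5]  = 2 * n / (t - b);
    m[8]  = (r + l) / (r - l);
    m[9]  = (t + b) / (t - b);
    m[10] = -(f + n) / (f - n);
    m[11] = -1;
    m[14] = -2 * f * n / (f - n);
    return m;
}
```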

Okay, having some very freaky stuff during the calibration phase again. I tried putting it in a nice GUI and so forth, and now it’s all screwed up. The problem is how I call calibrateCamera with arrays of arrays.

This doesn’t work:

// worldCoordinatesChessboardBuffer is a vector of vectors of cv::Point3f with the world coordinates of the chessboard corners
// imageCoordinatesChessboardBuffer is a vector of vectors of cv::Point2f with the coordinates of the chessboard corners in the projected image, so basically always the same points...
reprojError = calibrateCamera(worldCoordinatesChessboardBuffer, imageCoordinatesChessboardBuffer, cv::Size(projectorResolutionX, projectorResolutionY), cameraMatrix, distCoeffs, boardRotations, boardTranslations, flags);

High reprojection error, strange reprojections, etc.

See this screenshot; note the reprojected points are all screwed up:

But this does work, when transferring all the points to one array:

// worldCoordinatesChessboardBuffer is a vector of vectors of cv::Point3f with the world coordinates of the chessboard corners
// imageCoordinatesChessboardBuffer is a vector of vectors of cv::Point2f with the coordinates of the chessboard corners in the projected image, so basically always the same points...
// code from the correlate example -> put everything in one array
vector<vector<Point3f> > vvo(1); // object points
vector<vector<Point2f> > vvi(1); // image points
for (int i = 0; i < worldCoordinatesChessboardBuffer.size(); i++) {
    for (int j = 0; j < worldCoordinatesChessboardBuffer[i].size(); j++) {
        vvo[0].push_back(worldCoordinatesChessboardBuffer[i][j]);
        vvi[0].push_back(imageCoordinatesChessboardBuffer[i][j]);
    }
}
// actual calibration
reprojError = calibrateCamera(vvo, vvi, cv::Size(projectorResolutionX, projectorResolutionY), cameraMatrix, distCoeffs, boardRotations, boardTranslations, flags);

Low reprojection error, works as expected, etc.
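The transfer into a single array can be sketched self-contained like this (with a hypothetical `P3` stand-in for cv::Point3f; the image points get the identical treatment):

```cpp
#include <vector>

struct P3 { float x, y, z; };  // stand-in for cv::Point3f

// Concatenate the per-board corner buffers into the single inner
// vector ("one view") that the working calibrateCamera call above
// receives.
std::vector<std::vector<P3>> flattenBoards(
        const std::vector<std::vector<P3>>& boards) {
    std::vector<std::vector<P3>> vvo(1);
    for (const auto& board : boards)
        vvo[0].insert(vvo[0].end(), board.begin(), board.end());
    return vvo;
}
```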

See this screenshot:

Is this some strange OpenCV array-swap thing, or something? Or am I completely abusing OpenCV’s calibrateCamera function?


full code on

Hey Kj1,

Nice! I remember facing the same thing and ended up doing something like:

vvo[0] = worldCoordinatesChessboardBuffer;  
vvi[0] = imageCoordinatesChessboardBuffer;  

Which ended up giving nice reprojection results as well.

Apart from multiple-backend support, what is missing in your approach to get the Kinect view projected with the video projector?

Otherwise, I use osx as my primary platform.

It’s odd, the vector-of-vectors approach should work according to the docs. I really want to understand this stuff; I don’t like to release things without knowing why they work :smiley:

I’ve created abstract base classes for implementing the backends (see GitHub), which should make it easy to implement other backends (even threaded ones). Other than that, I’m adding a “test calibration” feature, to test whether the calibration works as expected by projecting onto moving objects.

Currently, the OpenCV reprojection approach works (although it’s mirrored vertically); the OpenGL approach (from your link) works but is mirrored both horizontally and vertically (which I can’t seem to “unmirror”). I’ll post later on that and push the latest version to GitHub.

Once all those things are done, I’m gonna make an addon with some decent samples, docs, credits and maybe a video; it should be released by the end of the weekend.

Don’t have much to add; just wanted to say that after a week of compiling + fixing RGBDemo only to have ofxCamaraLucida not actually work with my camera, you guys are my heroes.

Looking forward to a release!

What’s your depth cam? RGBDemo supports most depth cams, as I remember.

Kikko: don’t start implementing the backends just yet. The current structure doesn’t make sense.

The new abstract base class will just be a wrapper around the Kinect object. It’s more logical; then you can set up the Kinect in testApp.cpp like we’re used to doing.

Actually, all ofx Kinect addons should implement the same 3DCamera base class IMO; that would be much easier!

The Kinect calibration repo is updated in proper addon style (I think/hope).
It’s far from a release, and still has a lot of bugs and so on, but it works (in my setup).

The example is Windows-only and requires ofxFenster, ofxKinectNui & the MS Kinect >1.5 drivers to work.
Performance is low, but the Kinect update alone takes 30 ms, go figure. Something for tomorrow :slight_smile:

Hey! If you guys are still working on this, you might find some similar stuff in these (far from complete) repos:

Anyway, some nice tips in here, and that ofxCamaraLucida addon looks nice!

video of the calibration in action:

some more info about the project:
and some pretty pictures:

All made for MCBTH, a production of theatre Toneelhuis:!/nl/readmodus/production/?id=6302


Hi Kj1, do you plan to update ofxProjectorKinectCalibration, as ofxFenster does not work for 0.8? Thanks!

Hi Miu,

Sorry, I do not plan on updating this to 0.8 as of yet. Feel free to fork & adjust so it works.

I’m looking for a coordinate-converter function, from Kinect-tracked skeleton coordinates to real-world ones. My aim is to project the tracked skeleton onto the user’s body.
This surely implies a calibration procedure to generate the YML file, and some tools seem to be available online.


  • I could have a system like the one you used in MCBTH, so that I could put the skeleton on a mesh and then project it. Cool, but no tool seems easily available online.
  • The easier solution: a mathematical function that, given a 3D point and a YML file, simply generates the 3D coordinates in the real world, so the projector could put it on the right position on my body.

Am I wrong, or is this the right way?

Is this function already available somewhere in the code you did for MCBTH (in this thread you said that, at an early stage of development, you succeeded in doing that)?

Would it be possible to build a Quartz Composer plugin that incorporates that code? I’m actually drawing the skeleton with QC.



I (as a shy beginner) decided to write to you guys after a long time. Finally, I have got some .yml files from RGBDemo’s calibrate_projector module, and now I am struggling to compile ofxCamaraLucida.

@kj1: at the end of the calibration video above, the silhouette seemed to fit great (fast response/little error), but in the MCBTH video ( ), there seems to be some error in the fitting (around 01:40), I guess?! My question is: is this the best it gets? Can we compare the performance of an OF implementation with a Processing implementation (e.g. Gene Kogan’s… I found it the handiest to set up and go)?

Will I have trouble trying to compile MCBTH (switching to an earlier version than 0.8, I guess?!), as a guy who has failed (by now) to compile ofxCamaraLucida?

+1 for Michele’s QC wish; that would be awesome, at least for an impatient beginner like me.

Thanks for all you shared up to now,
Respect :slight_smile:

Ekin Horzum

Mostly correct: you’ll convert a 3D world position (from the Kinect skeleton) into a 2D projector coordinate.
The code for converting one point can be found here:

Lines 28 -> 38.
Basically it expects a depth coordinate (skeleton data is in depth x-y). It looks up the world coordinate (line 30) in XYZ (expressed in meters). This is then transformed using the YML file into the projector’s 2D space (line 36). All the other code is converting (not very efficiently) from & to OpenCV. Much more performant options are possible here :slight_smile:
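That final world-to-projector step boils down to a pinhole projection once the point is expressed in the projector’s coordinate frame; a minimal sketch, ignoring lens distortion (types and names are mine):

```cpp
struct P3 { float x, y, z; };  // world point in the projector's camera frame
struct P2 { float x, y; };     // projector pixel coordinate

// Apply the extrinsic rotation/translation from the YML file first,
// to bring the Kinect world point into the projector frame; the
// intrinsics (fx, fy in pixels, principal point cx, cy) then give
// the projector pixel.
P2 projectPoint(const P3& p, float fx, float fy, float cx, float cy) {
    return { fx * p.x / p.z + cx,
             fy * p.y / p.z + cy };
}
```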
good luck!