I'm trying to calibrate a Kinect-projector setup. I've tried tools such as RGBDemo, Elliot's VVVV patch, and ofxCamaraLucida.
However, I want a pure-OF implementation of the method shown here:
I would use Kyle's ofxCv addon to call the OpenCV methods.
This is the way I'd implement it. Now I ask: is this the right approach, and am I forgetting something?
Could this work? Does anyone want to contribute? I might need some help, especially with the last steps, 10-12.
Are there any beginner-friendly OpenCV examples to be found? Or some clear documentation?
1) Get a calibrated RGB image & depth image
2) Generate a world image (RGB + depth)
3) Find projector resolution
4) Generate fullscreen second window
5) Draw an 8x6 chessboard pattern with a given spacing (drawn manually; OpenCV's drawChessboardCorners only overlays already-detected corners on an image)
6) Output the projector coordinates of the inner corners for later use
// Grab 10-30 images while projecting the chessboard onto a flat surface
7) Use OpenCV's findChessboardCorners routine to get the locations of the inner corners in the world image (x, y, and thus also z)
8) When the corners are found and stable for 2 seconds, grab the coordinates in both projector and Kinect space, and add both sets to a queue
9) When the queues contain enough data points, continue
//Calibrate the projector
10) Using each pair of coordinate sets, plus the projector resolution, call OpenCV's calibrateCamera method
11) Obtain the intrinsics & extrinsics
12) Calculate the view transform & projection transform
13) Check the reprojection error; if > 1, go back to step 7
14) Apply this to the Kinect input (cf. ofxCamaraLucida)
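For steps 5-6, here is a minimal sketch of generating the projector-space pixel coordinates of the inner corners. It assumes "8x6" counts inner corners (OpenCV's patternSize convention, so 8 columns by 6 rows); the struct and function names are hypothetical, and in OF you'd likely use glm::vec2 instead:

```cpp
#include <cassert>
#include <vector>

// Hypothetical stand-in for glm::vec2 / ofVec2f, to keep this sketch self-contained.
struct Point2f { float x, y; };

// Projector-space positions of an 8x6 grid of inner chessboard corners.
// (originX, originY) is the first inner corner; `spacing` is the square size in pixels.
// These are the coordinates you'd store in step 6 for later use in calibrateCamera.
std::vector<Point2f> chessboardInnerCorners(float originX, float originY, float spacing) {
    std::vector<Point2f> corners;
    for (int row = 0; row < 6; ++row)       // 6 rows of inner corners
        for (int col = 0; col < 8; ++col)   // 8 columns of inner corners
            corners.push_back({originX + col * spacing, originY + row * spacing});
    return corners;
}
```

These corner positions, paired with the 3D Kinect-space points found in step 7, form one calibration sample.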
This would be my guide for 10-12, but it is more complex than anticipated (isn't it always?).
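For step 12, one common approach is to turn the pinhole intrinsics returned by calibrateCamera (fx, fy, cx, cy) into an OpenGL-style projection matrix via the standard glFrustum construction. This is only a sketch under that assumption; the function name is made up, the matrix is row-major, and the sign of the y terms depends on whether your image origin is top-left or bottom-left:

```cpp
#include <cassert>
#include <cmath>

// Build a row-major 4x4 OpenGL-style projection matrix from pinhole intrinsics.
// fx, fy: focal lengths in pixels; cx, cy: principal point; w, h: projector resolution.
void projectionFromIntrinsics(float fx, float fy, float cx, float cy,
                              float w, float h, float nearZ, float farZ,
                              float m[16]) {
    // Frustum extents at the near plane, derived from the pinhole model.
    float left   = -cx * nearZ / fx;
    float right  = (w - cx) * nearZ / fx;
    float bottom = -(h - cy) * nearZ / fy;   // flip bottom/top if your y axis points up
    float top    = cy * nearZ / fy;
    // Standard glFrustum matrix, written out row by row.
    m[0]  = 2 * nearZ / (right - left); m[1]  = 0;                          m[2]  = (right + left) / (right - left); m[3]  = 0;
    m[4]  = 0;                          m[5]  = 2 * nearZ / (top - bottom); m[6]  = (top + bottom) / (top - bottom); m[7]  = 0;
    m[8]  = 0;                          m[9]  = 0;                          m[10] = -(farZ + nearZ) / (farZ - nearZ); m[11] = -2 * farZ * nearZ / (farZ - nearZ);
    m[12] = 0;                          m[13] = 0;                          m[14] = -1;                               m[15] = 0;
}
```

The view transform is then just the extrinsics (rotation + translation) from calibrateCamera loaded as the modelview matrix, which together with this projection should reproduce the projector's mapping from world points to pixels.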