ofxKinectCalibration, as part of ofxKinect, seems technically complete at a glance, but the results (for me) aren’t very accurate.
Example of a point cloud:
Note the calibration error on the hand/fingers.
Has anybody else had any experience with this?
I did the calibration with a chessboard pattern, using this tool:
then hardcoded the intrinsic coefficients in the code, with the intention of adding some calibration algorithms later, but I never had time to add them.
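For context, those hardcoded intrinsics get used in a standard pinhole back-projection when the point cloud is built. Here’s a minimal sketch of that step; the fx/fy/cx/cy values are placeholder example numbers, not the actual constants in ofxKinect:

```python
# Sketch of pinhole back-projection from a depth pixel to a 3D point.
# The intrinsic values here are illustrative placeholders, NOT the real
# ofxKinect constants.

FX, FY = 594.21, 591.04   # focal lengths in pixels (example values)
CX, CY = 339.5, 242.7     # principal point in pixels (example values)

def depth_to_point(u, v, z_m):
    """Back-project pixel (u, v) with depth z_m (metres) into camera space."""
    x = (u - CX) * z_m / FX
    y = (v - CY) * z_m / FY
    return (x, y, z_m)

# A pixel at the principal point always lands on the optical axis:
# depth_to_point(CX, CY, 1.0) -> (0.0, 0.0, 1.0)
```

If the per-device intrinsics are slightly off, every point gets scaled/shifted by the wrong amounts, which is exactly the kind of error that shows up at fingers and other fine detail.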
On the Kinects I’ve tried it works more or less, but of course the parameters are slightly different for different cameras.
Kyle told me there’s a way to ask the Kinect for its calibration parameters. It seems OpenNI uses that, and libfreenect was implementing it, if they haven’t already.
Aha. I just presumed the calibration params OpenNI pulls from the Kinect were identical across devices. (I can’t quite imagine the job on the Chinese assembly line where someone dances the chessboard all day long.)
I wonder how far ‘out’ each Kinect is with respect to the others.
It might be batch based.
Also, it may be possible to identify the specific parameters that vary between devices, and then calibrate only for those.
I’ve made a decent enough implementation with ofxCv in the past to output calibration params for depth<>RGB, but I was hoping to bypass that stage.
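For anyone unfamiliar, depth<>RGB calibration boils down to an extrinsic rigid transform between the two cameras followed by a second projection. A minimal sketch of that mapping; the rotation, translation, and RGB intrinsics below are illustrative placeholders, not output from any real calibration:

```python
# Sketch of mapping a 3D point from depth-camera space into RGB pixel coords.
# R, T and the RGB intrinsics are placeholders, not real calibration output.

R = [[1.0, 0.0, 0.0],     # rotation depth -> RGB (identity placeholder)
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
T = [0.025, 0.0, 0.0]     # translation in metres (placeholder baseline)
RGB_FX, RGB_FY = 520.0, 520.0   # RGB camera intrinsics (placeholders)
RGB_CX, RGB_CY = 320.0, 240.0

def depth_point_to_rgb_pixel(p):
    """Apply the extrinsics, then project with the RGB camera's intrinsics."""
    x = R[0][0] * p[0] + R[0][1] * p[1] + R[0][2] * p[2] + T[0]
    y = R[1][0] * p[0] + R[1][1] * p[1] + R[1][2] * p[2] + T[1]
    z = R[2][0] * p[0] + R[2][1] * p[1] + R[2][2] * p[2] + T[2]
    u = RGB_FX * x / z + RGB_CX
    v = RGB_FY * y / z + RGB_CY
    return (u, v)
```

That extrinsic step is the part a stereo chessboard calibration estimates, and it’s the stage registration in the driver lets you skip.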
But anyway, this RGBDemo looks very sweet, lots of great features to play around with.
Just wish I hadn’t left my chessboard in Korea…
Time to use up some of that black liquid gold.
Kyle removed the calibration and movie recorder/player as we are now using an updated libfreenect that includes registration (aka calibration).