Trouble using Kj1's Kinect2ProjectorCalibration in my own project

My plan is to use Kj1’s example as a calibration tool. Once calibrated, I load the generated .yml into an instance of KinectProjectorOutput to transform my tracked blobs into the correct projector space. Aside from using ofCv instead of ofxCv, and doing my tracking on the raw depth rather than the bodyIndex frame, I am doing almost exactly what Kj does in the example.

Even when using the same tracker, I’m still getting projected points that are all identical and far outside the range of my projected image, e.g. {x=-4705.0, y=-5000.0}.

I’ve double-checked that the .yml is loaded correctly, but I don’t have a firm enough grasp on what is going on to debug the underpinnings.

Any help on reusing this great example would be appreciated.

I was able to figure out that I needed to initialize a few more streams and use the same values for kinect.setDepthClipping. However, the majority of points coming out of the conversion are still useless: mostly -nan.
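For anyone hitting the same thing, here is a minimal standalone sketch of the filter I use to throw away the -nan and out-of-range results before drawing. The `Pt` struct and the function name are mine (a stand-in for ofPoint), not part of any addon:

```cpp
#include <cmath>
#include <vector>

// Hypothetical stand-in for ofPoint: x/y only, enough to show the filter.
struct Pt { float x, y; };

// Keep only projected points that are finite and inside the projector
// resolution (w x h). NaN compares false against everything, so
// std::isfinite is the reliable check here.
std::vector<Pt> filterProjected(const std::vector<Pt>& pts, float w, float h) {
    std::vector<Pt> good;
    for (const auto& p : pts) {
        if (std::isfinite(p.x) && std::isfinite(p.y) &&
            p.x >= 0 && p.x < w && p.y >= 0 && p.y < h) {
            good.push_back(p);
        }
    }
    return good;
}
```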

What Kinect addon are you using? During calibration, is the chessboard detected and reprojected correctly?

see this thread for more information: Projector-kinect calibration

Also note that it is quite outdated and I expect better solutions to be out there :slight_smile:

Kj,
Thanks for the response. I’m using your branch of ofxKinectV2. I’ve gotten good reprojection results with your software and as few as 5-10 samples of the chessboard. I am looking into other solutions, but they all use similar matrices. Right now I’m using ofxGLWarper to ~MaNUALLY~ (ick) reproject my images. I’m getting a better handle on how these matrices work, so my next step may be to load the matrix .yml in and modify roymacdonald’s code to accept it in the constructor.

I failed to mention that I am using your projectFromDepthXYVector() method, just like your example. I assume the bodyIndex and depth streams are in the same coordinate space.

Hmm, there are some leftovers from some experiments (mirroring etc.) which might explain the results. Also, half of the depth data is usually invalid or noise, which can lead to NaNs. Make sure to check the validity of the depth points in advance.
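As a minimal sketch of such a validity check on the raw depth values: Kinect v2 depth frames report invalid pixels as 0, and the sensor's nominal reliable range is roughly 500-4500 mm. The helper names here are mine, for illustration only:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A raw depth value of 0 means "no reading"; values outside the sensor's
// nominal 500-4500 mm range are unreliable and worth skipping too.
bool isValidDepth(uint16_t mm) {
    return mm >= 500 && mm <= 4500;
}

// Count how many pixels of a raw depth frame are actually usable
// before feeding them to the reprojection.
size_t countValidDepth(const std::vector<uint16_t>& depth) {
    size_t n = 0;
    for (uint16_t d : depth) {
        if (isValidDepth(d)) ++n;
    }
    return n;
}
```

If most of a frame fails this check, the NaNs downstream are expected rather than a calibration problem.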

Try the basic function (which works per point instead of on a whole vector, so it is less efficient, but it will likely pinpoint the problem):

// First, convert your depth point to world space manually:
ofPoint ptWorld = kinect->mapDepthPointToWorldPoint(xyDepthCoordinate);
// Check whether the world point is valid - invalid points come back as (0,0,0):
if (ptWorld.x != 0 && ptWorld.y != 0 && ptWorld.z != 0) {
    // Reproject with the basic per-point function:
    ofPoint reprojected = kinectProjectorOutput.projectFromWorld(ptWorld);
    // is this correct?
}