Projection Mapping Calibration

I have an OpenCV question that I’m hoping someone can help me with…

It’s not an original idea: take four points with known 3D-space coordinates and the 2D screen coordinates they map to, and from those correspondences find the modelView matrix that lets me orient myself and draw in the proper 3D space. It’s similar to the augmented-reality demos that were making the rounds a few years ago – track the orientation of a checkerboard in a camera feed and draw stuff on top of it – only I’m specifying the orientation myself, with no camera tracking.

I’m using cv::solvePnP to get my “camera” rotation and translation given a set of 2D–3D point correspondences. But… it’s not working.
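Here’s roughly the shape of my call – the point values below are placeholders just to show the structure (the real ones come from my mapping setup), and the cameraMatrix contents are a stab in the dark, which is part of what I’m asking about:

```cpp
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/core/core.hpp>
#include <vector>

// Sketch of my solvePnP call. Point values are placeholders; the
// cameraMatrix numbers are a guess -- see my questions below.
void estimatePose(cv::Mat& rvec, cv::Mat& tvec) {
    // Known 3D positions of my reference points, in world units.
    std::vector<cv::Point3f> objectPoints;
    objectPoints.push_back(cv::Point3f(0,   0,   0));
    objectPoints.push_back(cv::Point3f(100, 0,   0));
    objectPoints.push_back(cv::Point3f(100, 100, 0));
    objectPoints.push_back(cv::Point3f(0,   100, 0));

    // The 2D screen positions (pixels) those points should project to.
    std::vector<cv::Point2f> imagePoints;
    imagePoints.push_back(cv::Point2f(320, 240));
    imagePoints.push_back(cv::Point2f(620, 250));
    imagePoints.push_back(cv::Point2f(610, 540));
    imagePoints.push_back(cv::Point2f(330, 530));

    // Pinhole intrinsics: fx, fy = focal length in pixels,
    // (cx, cy) = principal point. The 745 here is my stab in the dark.
    cv::Mat cameraMatrix = (cv::Mat_<double>(3, 3) <<
        745, 0,   512,
        0,   745, 384,
        0,   0,   1);
    cv::Mat distCoeffs = cv::Mat::zeros(4, 1, CV_64F); // no lens distortion

    // Outputs: rotation as a Rodrigues vector, plus a translation vector.
    cv::solvePnP(objectPoints, imagePoints, cameraMatrix, distCoeffs, rvec, tvec);
}
```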

I’ve looked for examples online and still can’t figure out a couple of things. First, the cameraMatrix input: I don’t understand how its parameters relate to a viewport. Second, I’m not sure how to turn the solvePnP output (rvec and tvec) into my modelView matrix.
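For reference, here’s my current best guess at both pieces, cobbled together from scattered examples – so treat every line as suspect. The idea is to build the intrinsics from the viewport size and the vertical FOV of my GL projection, then run rvec through cv::Rodrigues and flip the y/z rows to go from OpenCV’s camera convention to OpenGL’s:

```cpp
#include <opencv2/calib3d/calib3d.hpp>
#include <opencv2/core/core.hpp>
#include <cmath>

// My guess at the intrinsics for a w x h viewport whose GL projection has
// a vertical field of view of fovy (radians): focal length in pixels
// derived from the FOV, principal point at the viewport center.
cv::Mat guessCameraMatrix(int w, int h, double fovy) {
    double fy = (h / 2.0) / std::tan(fovy / 2.0); // focal length in pixels
    double fx = fy;                               // assume square pixels
    return (cv::Mat_<double>(3, 3) <<
        fx, 0,  w / 2.0,
        0,  fy, h / 2.0,
        0,  0,  1);
}

// My guess at the rvec/tvec -> modelView conversion. OpenCV's camera has
// x right, y down, looking down +z; OpenGL has x right, y up, looking down
// -z, so the 2nd and 3rd rows get negated. Output is column-major.
void pnpToModelView(const cv::Mat& rvec, const cv::Mat& tvec, double m[16]) {
    cv::Mat R;
    cv::Rodrigues(rvec, R); // 3x3 rotation from the Rodrigues vector

    for (int row = 0; row < 3; ++row) {
        double sign = (row == 0) ? 1.0 : -1.0; // flip the y and z rows
        for (int col = 0; col < 3; ++col)
            m[col * 4 + row] = sign * R.at<double>(row, col);
        m[12 + row] = sign * tvec.at<double>(row, 0); // translation column
    }
    m[3] = m[7] = m[11] = 0.0; // bottom row: 0 0 0 1
    m[15] = 1.0;
}
```

…and then something like glMatrixMode(GL_MODELVIEW); glLoadMatrixd(m); before drawing the 3D content. Is any of that right?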

Here’s my project:


(oF for OS X, v0.8.4)

Hit the space bar to see what I’d LIKE the output to be, using what I’ve determined should be the correct modelView matrix. Hit the space bar again to see the rendering with the (erroneous) modelView matrix I calculate from solvePnP.

Also, moving the mouse rolls the camera, but its position doesn’t change – so why does the tvec output from solvePnP change? That doesn’t seem right; I must be doing something wrong with the inputs to solvePnP (probably the cameraMatrix?).
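In case it helps frame the question, the sanity check I have in mind is re-projecting the 3D points through the recovered pose with cv::projectPoints and measuring how far they land from the 2D inputs – the wrapper function here is just my sketch:

```cpp
#include <opencv2/calib3d/calib3d.hpp>
#include <cmath>
#include <iostream>
#include <vector>

// Push the 3D points back through the recovered pose; if the inputs to
// solvePnP were consistent, every error here should be close to zero.
void reportReprojectionError(const std::vector<cv::Point3f>& objectPoints,
                             const std::vector<cv::Point2f>& imagePoints,
                             const cv::Mat& rvec, const cv::Mat& tvec,
                             const cv::Mat& cameraMatrix,
                             const cv::Mat& distCoeffs) {
    std::vector<cv::Point2f> reprojected;
    cv::projectPoints(objectPoints, rvec, tvec, cameraMatrix, distCoeffs,
                      reprojected);

    for (size_t i = 0; i < imagePoints.size(); ++i) {
        cv::Point2f d = reprojected[i] - imagePoints[i];
        std::cout << "point " << i << ": "
                  << std::sqrt(d.x * d.x + d.y * d.y) << " px off\n";
    }
}
```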

Any ideas? I feel like I’ve been hacking at it with a machete for days because I don’t have a good enough handle on the linear algebra. Please help!