Dynamic 3D projection mapping with head tracking

Hi guys,

I’m looking into creating a dynamic 3D projection mapping.
For normal 3D projection mapping, two rendering passes are needed: the first from the viewer’s perspective, and the second from the perspective of the projector, with the texture from the first pass projected onto a 3D model of the scene. I was hoping it would be possible to do this in openFrameworks to make it realtime, and to use a Kinect to track the viewer’s location.
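To make it concrete, this is roughly the draw loop I have in mind (just a sketch; every name here, like viewerCam, projectorCam, projectiveShader and headPosition, is a placeholder rather than working code):

// Rough sketch of the intended two-pass draw loop in openFrameworks.
// viewerFbo is an ofFbo allocated in setup(); headPosition would come from the Kinect.
void ofApp::draw(){
    // Pass 1: render the virtual scene from the tracked viewer's perspective into an FBO.
    viewerCam.setPosition(headPosition);
    viewerCam.lookAt(sceneCenter);
    viewerFbo.begin();
    ofClear(0);
    viewerCam.begin();
    drawScene();
    viewerCam.end();
    viewerFbo.end();

    // Pass 2: render the 3D model of the physical scene from the projector's
    // perspective, while a shader projects the pass-1 texture onto it using the
    // viewer camera's view-projection matrix.
    projectorCam.begin();
    projectiveShader.begin();
    projectiveShader.setUniformTexture("projectedTex", viewerFbo.getTexture(), 0);
    projectiveShader.setUniformMatrix4f("viewerViewProjection",
        viewerCam.getProjectionMatrix() * viewerCam.getModelViewMatrix());
    sceneModel.draw();
    projectiveShader.end();
    projectorCam.end();
}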

I haven’t ventured this far into graphics programming before, but I want to learn the things needed to achieve this.

  • Is this actually doable in openFrameworks?
  • What I’m most uncertain about is how to project a texture. I’ve searched around but can’t find how this is done, though I’m sure I’m not the first one to try it.
  • The projector’s properties will need to be matched by the virtual camera. I’ve done this before in 3D software by taking some measurements and fiddling with the camera settings until it ‘looked close enough’. Is there a better (and hopefully more automated) way of doing this? Is something like https://github.com/jhdewitt/sltk/wiki/projector-lens-intrinsic-calibration-example of any help?

Any help or tips on what I should look into to understand how this works would be much appreciated :)

Thanks,
Gerben

I’ve got an example app going for the projective texturing, based on the code from Projective texture mapping GLSL. To determine each vertex’s location in world space I’m also passing the transformation matrix of the textured object to the shaders, which makes it possible to project a texture onto multiple objects. If anyone is interested I can share the code.
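The core of it ends up looking roughly like this (a trimmed-down sketch; the uniform names are my own, not from the original thread):

#include "ofMain.h"

// The object's model matrix is passed in explicitly so the texture coordinates
// are computed in world space, which is what makes it work across multiple objects.
static const std::string vertSrc = R"GLSL(
    #version 150
    uniform mat4 modelViewProjectionMatrix; // provided by openFrameworks
    uniform mat4 objectModelMatrix;         // per-object model matrix
    uniform mat4 casterViewProjection;      // view-projection of whatever casts the texture
    in vec4 position;
    out vec4 projTexCoord;

    // Bias matrix remaps clip space [-1, 1] to texture space [0, 1]
    const mat4 bias = mat4(0.5, 0.0, 0.0, 0.0,
                           0.0, 0.5, 0.0, 0.0,
                           0.0, 0.0, 0.5, 0.0,
                           0.5, 0.5, 0.5, 1.0);

    void main(){
        vec4 worldPos = objectModelMatrix * position;
        projTexCoord  = bias * casterViewProjection * worldPos;
        gl_Position   = modelViewProjectionMatrix * position;
    }
)GLSL";

static const std::string fragSrc = R"GLSL(
    #version 150
    uniform sampler2D projectedTex;
    in vec4 projTexCoord;
    out vec4 fragColor;

    void main(){
        // Perspective divide; w <= 0 means the fragment is behind the caster
        vec2 uv = projTexCoord.xy / projTexCoord.w;
        fragColor = (projTexCoord.w > 0.0) ? texture(projectedTex, uv) : vec4(0.0);
    }
)GLSL";

// In setup():
//   shader.setupShaderFromSource(GL_VERTEX_SHADER, vertSrc);
//   shader.setupShaderFromSource(GL_FRAGMENT_SHADER, fragSrc);
//   shader.linkProgram();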

For the projector calibration it seemed that using OpenCV’s calibrateCamera was the way to go. After finding mapamok it seemed sensible to use a 3D model for calibration instead of a checkerboard. I couldn’t get mapamok to work or recompile, so I’ve made my own dumbed-down version.
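The actual solve boils down to something like this (a sketch; it assumes imagePoints was collected by clicking the projector-image positions that correspond to known 3D vertices of the model, objectPoints):

#include <opencv2/calib3d.hpp>
#include <vector>

// One non-coplanar "view" of 3D-2D correspondences is enough, as long as we
// start from an intrinsic guess and let calibrateCamera refine it.
void calibrateProjector(const std::vector<cv::Point3f>& objectPoints,
                        const std::vector<cv::Point2f>& imagePoints,
                        cv::Size projectorSize){
    double f = projectorSize.width; // rough focal length guess, in pixels
    cv::Mat K = (cv::Mat_<double>(3, 3) <<
        f, 0, projectorSize.width  * 0.5,
        0, f, projectorSize.height * 0.5,
        0, 0, 1);
    cv::Mat distCoeffs = cv::Mat::zeros(5, 1, CV_64F);

    std::vector<std::vector<cv::Point3f>> objs { objectPoints };
    std::vector<std::vector<cv::Point2f>> imgs { imagePoints };
    std::vector<cv::Mat> rvecs, tvecs;

    int flags = cv::CALIB_USE_INTRINSIC_GUESS  // refine K instead of solving from scratch
              | cv::CALIB_FIX_ASPECT_RATIO
              | cv::CALIB_FIX_K1 | cv::CALIB_FIX_K2 | cv::CALIB_FIX_K3
              | cv::CALIB_ZERO_TANGENT_DIST;   // ignore lens distortion to keep the solve stable

    double rms = cv::calibrateCamera(objs, imgs, projectorSize,
                                     K, distCoeffs, rvecs, tvecs, flags);
    // K now holds the projector intrinsics; rvecs[0]/tvecs[0] are the extrinsics
    // (model-to-projector rotation and translation); rms is the reprojection error in pixels.
}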

The last hurdle seems to be applying the calibration and getting the objects and cameras properly arranged in a scene, and, depending on how that works out, how to apply the translation to the viewer’s perspective camera.
Mapamok applies the calibration as shown below, but I haven’t been able to work out how to manipulate the camera’s location from this, or how to apply it to a normal ofCamera.

// Save both matrix stacks so the calibrated matrices don't leak out of this block.
glPushMatrix();               // push the current modelview matrix
glMatrixMode(GL_PROJECTION);
glPushMatrix();               // push the current projection matrix
glMatrixMode(GL_MODELVIEW);

if (calibrationReady) {
    // Load the projector intrinsics into GL_PROJECTION (near/far planes 10 and 2000)
    intrinsics.loadProjectionMatrix(10, 2000);
    // Apply the extrinsics (the model-to-projector transform) to the modelview matrix
    applyMatrix(modelMatrix);
    render();
}

// Restore both stacks.
glPopMatrix();                // pops the modelview matrix
glMatrixMode(GL_PROJECTION);
glPopMatrix();                // pops the projection matrix
glMatrixMode(GL_MODELVIEW);
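For anyone who gets further than me: my current thinking is that the intrinsics should translate to a field of view plus lens offset on an ofCamera, and the extrinsics to its node transform. A sketch of that idea (untested; the sign conventions in particular may need flipping):

#include "ofMain.h"
#include <opencv2/calib3d.hpp>

// K, rvec and tvec are assumed to come from the cv::calibrateCamera step above;
// w and h are the projector resolution.
void applyCalibrationToCamera(ofCamera& cam, const cv::Mat& K,
                              const cv::Mat& rvec, const cv::Mat& tvec,
                              float w, float h){
    // Intrinsics: vertical fov from the focal length, principal-point shift
    // as a lens offset in normalized device coordinates.
    double fy = K.at<double>(1, 1);
    double cx = K.at<double>(0, 2);
    double cy = K.at<double>(1, 2);
    cam.setFov(ofRadToDeg(2.0 * atan(0.5 * h / fy)));
    cam.setLensOffset(glm::vec2(-(cx - 0.5 * w) / (0.5 * w),
                                 (cy - 0.5 * h) / (0.5 * h)));

    // Extrinsics: rvec/tvec map model space to camera space (x_cam = R*x + t),
    // so the camera pose in model space is the inverse: position -R^T*t,
    // orientation R^T. OpenCV cameras look down +z with y down, OpenGL down -z
    // with y up, hence the diag(1,-1,-1) flip.
    cv::Mat R;
    cv::Rodrigues(rvec, R);
    cv::Mat pos = -R.t() * tvec;

    glm::mat3 rt; // R^T in glm's column-major layout
    for(int r = 0; r < 3; ++r)
        for(int c = 0; c < 3; ++c)
            rt[c][r] = float(R.at<double>(c, r));
    glm::mat3 flip(1, 0, 0,   0, -1, 0,   0, 0, -1);

    cam.setPosition(pos.at<double>(0), pos.at<double>(1), pos.at<double>(2));
    cam.setOrientation(glm::quat_cast(rt * flip));
}

With that, drawing inside cam.begin() / cam.end() should (in theory) reproduce what the glPushMatrix / loadProjectionMatrix block above does, while still letting you move and interpolate the camera like any other ofCamera.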