Using ofxKinectForWindows2, how do I get the right color for a depth point?

I’m using @elliotWoods’ excellent ofxKinectForWindows2 addon. I noticed that in exampleWithBodyIndex, the mesh is drawn with the RGB data by binding the colorSource and then drawing the mesh like so:

kinect.getColorSource()->getTexture().bind(2);

ofSetColor(255);
ofMesh mesh = kinect.getDepthSource()->getMesh(bStitchFaces, ofxKFW2::Source::Depth::PointCloudOptions::ColorCamera);
mesh.draw();

kinect.getColorSource()->getTexture().unbind(2);

The color source is 1920x1080 and the mesh is at the depth resolution of 512x424. The mesh has a fairly reasonable mapping from depth to color (though it is still slightly off). In my original attempt to explicitly add a color to each vertex of the mesh, I arbitrarily chose the first 512x424 pixels of the color image, and the alignment was WAY off.

In the actual Windows Kinect library, there’s a class called CoordinateMapper with a function MapDepthPointToColorSpace, which maps a point from depth space to RGB space (the reverse direction is only available per-frame, via MapColorFrameToDepthSpace). How do we access that, or is there a way I could get the pixels out of the texture space so that they line up with the depth mesh produced by ofxKFW2::Source::Depth::PointCloudOptions::ColorCamera?
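If the addon exposes the raw sensor (a `getSensor()` accessor is an assumption on my part; check the addon source), you could pull the ICoordinateMapper out yourself. A sketch using the stock Kinect v2 SDK names (not compiled here, since it needs the SDK headers and hardware):

```cpp
#include <Kinect.h>  // Kinect v2 SDK
#include <vector>

// Assumes the addon exposes the raw IKinectSensor* (hypothetical
// getSensor(); check ofxKFW2::Device for the actual accessor).
void mapDepthToColor(IKinectSensor * sensor,
                     const std::vector<UINT16> & depthFrame) {  // 512*424 values
    ICoordinateMapper * mapper = nullptr;
    sensor->get_CoordinateMapper(&mapper);

    // One color-space coordinate per depth pixel; pixels with no valid
    // mapping come back as (-infinity, -infinity).
    std::vector<ColorSpacePoint> colorCoords(depthFrame.size());
    mapper->MapDepthFrameToColorSpace((UINT) depthFrame.size(), depthFrame.data(),
                                      (UINT) colorCoords.size(), colorCoords.data());

    // colorCoords[i] now holds the 1920x1080 pixel position for depth
    // pixel i, ready for a per-vertex color lookup.
}
```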

Hmm, from what I can tell, the addon has a protected member coordinateMapper which does the mapping (and it does a decent job). When you query the mesh with the ColorCamera option, the mesh's texture coordinates already point to the right color pixels (some fall outside the bounds of the RGB camera and get clamped). I’ll update here if I get any further.

Depth::getColorToWorldMap() and Depth::getDepthToWorldMap() seem to return per-pixel conversion tables (judging by the names, into world/camera space).
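If the depth-to-world map stores per-pixel ray factors at unit depth (an assumption about the table's format which I haven't verified against the addon), then reconstructing a camera-space point is just a scale by the measured depth. A self-contained sketch with hypothetical names:

```cpp
#include <array>

// Hypothetical format: for each depth pixel the map stores (x, y)
// multipliers at unit depth; the camera-space point is that ray
// scaled by the measured depth in meters.
struct Ray { float x; float y; };

std::array<float, 3> depthToWorld(Ray ray, float depthMeters) {
    return { ray.x * depthMeters, ray.y * depthMeters, depthMeters };
}
```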

Btw, I wonder whether a public getCoordinateMapper() function is needed, since Joint::getProjected() requires a coordinateMapper.