I’m using @elliotWoods’ excellent ofxKinectForWindows2 addon. I noticed that in exampleWithBodyIndex, the mesh is drawn with the RGB data by binding the color source’s texture and then drawing the mesh, like so:
    kinect.getColorSource()->getTexture().bind(2);
    ofSetColor(255);
    ofMesh mesh = kinect.getDepthSource()->getMesh(
        bStitchFaces,
        ofxKFW2::Source::Depth::PointCloudOptions::ColorCamera);
    mesh.draw();
    kinect.getColorSource()->getTexture().unbind(2);
The color source is 1920x1080 and the mesh is at the depth resolution of 512x424. The mesh has a pretty reasonable mapping of depth to color (though it is still visibly off). In my original attempt to explicitly add a color to each vertex of the mesh, I arbitrarily chose the first 512x424 color pixels, and the alignment was WAY off.
In the actual Windows Kinect library there’s a class called CoordinateMapper. Its MapDepthPointToCameraSpace function maps a depth point into 3D camera space, and MapDepthPointToColorSpace / MapDepthFrameToColorSpace map depth points into the RGB (color) image, with inverse mappings available as well. How do I access that from the addon? Or is there another way to get the pixels out of the texture so they line up with the depth mesh?