In interfacing with the Microsoft Kinect SDK 2.0 (and the Intel RealSense SDK R2), I’m using the built-in coordinate mapper(s) to translate coordinates from the depth image to the color image to camera space - useful when building point clouds, for instance.
I noticed that Elliot Woods, in ofxKinectForWindows2::Source::Depth, uses the Kinect coordinate mapper to fill in the texture coordinates for an ofMesh in one line. The coordinate mapper dumps the UVs into an array - but it expects an array of datatype ColorSpacePoint (a struct of 2x 32-bit floats).
Elliot simply casts the ofMesh’s texture coordinates vector data pointer to (ColorSpacePoint *), and it works like a charm.
The line in question is:
this->coordinateMapper->MapDepthFrameToColorSpace(frameSize, this->pixels.getPixels(), frameSize, (ColorSpacePoint*) mesh.getTexCoordsPointer());
(line 152 in ofxKinectForWindows2::Source::Depth)
Clearly, this works. But I want to make sure I understand why it works.
I think it’s because an ofVec2f essentially just stores 2x 32-bit floats, so in memory it’s equivalent to a ColorSpacePoint struct… and that’s because the only (non-static) member variables in an ofVec2f are the x and y floats? I.e. the x and y of both data types are the only things actually stored in each object, and the rest of the ofVec2f class (its member functions) lives elsewhere - on the heap? (I’m still fuzzy on these sorts of distinctions… learning as I go… have mercy)
A related question: why does that line use a simple C-style cast, whereas further down, in lines 191-193, Elliot uses reinterpret_cast<CameraSpacePoint*> for a similar operation:
this->coordinateMapper->MapColorFrameToCameraSpace( this->pixels.size(), this->pixels.getPixels(), this->colorFrameSize, reinterpret_cast<CameraSpacePoint*>(world.getData()));
What’s the difference between the plain C-style cast and the reinterpret_cast? I found this reference but it seems like it’s written for someone who already knows the answer…
thanks in advance!