About your post above (“i think, it should be possible like that but maybe i am inventing a problem here and there is a simpler solution for that?”):
I indeed think there is a much simpler solution. In setups like yours (as far as I understand) I personally skip modeling the real 3D situation (the stage), as that quickly gets complicated and can even be imprecise (especially when the projector position and optics(!) change). It is also unnecessary (again: as far as I understand…):
I have the 3D coordinates from the Kinect. I then draw the object in an abstract 3D space (which means virtually no calculations are needed, only scaling to fit the size) into an FBO (which effectively makes it 2D). The scaling of z might differ from that of x and y to get the right feeling of depth, but that is easily done. The FBO is what I project.
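To make the “only scaling” step concrete, here is a minimal, self-contained C++ sketch of what I mean (no openFrameworks dependencies, so the names `Vec3` and `kinectToFbo` and all the range constants are my own assumptions, not a fixed API — adjust the ranges to your sensor and scene):

```cpp
// Minimal sketch: map Kinect 3D coordinates into FBO (pixel) space.
// All ranges below are assumptions for illustration; tune them by sight.
struct Vec3 { float x, y, z; };

Vec3 kinectToFbo(const Vec3& p, float fboW, float fboH) {
    // Assumed working volume of the Kinect in meters:
    // x/y roughly centered on the sensor axis, z = depth.
    const float xMin = -2.0f, xMax = 2.0f;
    const float yMin = -1.5f, yMax = 1.5f;
    const float zMin =  0.5f, zMax = 4.5f;

    // Linear remap of v from [a, b] to [outA, outB].
    auto remap = [](float v, float a, float b, float outA, float outB) {
        return outA + (v - a) / (b - a) * (outB - outA);
    };

    Vec3 out;
    out.x = remap(p.x, xMin, xMax, 0.0f, fboW);
    out.y = remap(p.y, yMin, yMax, 0.0f, fboH);
    // z gets its own independent scale so the depth feeling
    // can be tuned separately from x and y.
    out.z = remap(p.z, zMin, zMax, 0.0f, 300.0f);
    return out;
}
```

In openFrameworks you would then draw the object at this position inside `fbo.begin()` / `fbo.end()` and project the FBO; the point is that the only math involved is this kind of linear remapping.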
To map it onto the stage I use the brilliant ofxBezierWarp by Matt Gingold (or any other mapping tool, even an external one if you use Syphon/Spout).
So I don’t have to do any heavy calculations; I just do the mapping by eye.
This way, the connection from Kinect 3D space to stage 3D space is made implicitly and intuitively. Any distortion from the projector optics can also be corrected easily.
Hope this helps.
Please just ask in case you want to go this way and have questions.
Have a good day!