ofxBox2d mapped puppet (using ofxOpenNI) and collision detection help


Hi everyone,

I am a grad student still learning openFrameworks, and my C++ may be a bit rusty… I was looking at the sosolimited / kinectcharacter project as an example to learn Kinect puppet dynamics, which made a lot of sense. I wanted to take it a step further and see if I could add collision detection as an interaction mechanism. I've been pulling my hair out trying to figure this out for a couple of weeks, so I would really appreciate input from more experienced people!

For each body part, the position is calculated as the midpoint between two joints, and the limb is then rotated around it using the ofRotate function. This makes sense for rotating the limbs at the joints. I mapped an ofxBox2dRect object behind each body part (the white rectangles behind the crow).

When I call this function in testApp.cpp, the ofxBox2dRect objects for the falling logs do not interact with the rotated limbs. This is because the ofxBox2dRect objects for the body parts are created inside their own translate/rotate transform (offset by the joint position and rotated away from 0,0), while the falling logs are not part of that transform, so they keep missing the limbs.

Do you have any suggestions for a better way to do this? If there is a best practice for combining box2d with a Kinect-based puppet, I would really appreciate hearing it!

Code - puppet body part
for (int i = 0; i < user_generatorMain.GetNumberOfUsers(); i++) {

    int id = i + 1;

    // skip users that are not being tracked
    if (!user_generatorMain.GetSkeletonCap().IsTracking(id)) {
        //printf("Not tracking this user: %d\n", id);
        continue;
    }

    xn::SkeletonCapability pUserSkel = user_generatorMain.GetSkeletonCap();

    // averaging joints 1 and 2 here for situations like the torso
    // where there is no midpoint joint
    XnPoint3D position[3];
    XnSkeletonJointPosition jointPos[3];
    pUserSkel.GetSkeletonJointPosition(id, joints[0], jointPos[0]);
    pUserSkel.GetSkeletonJointPosition(id, joints[1], jointPos[1]);
    pUserSkel.GetSkeletonJointPosition(id, joints[2], jointPos[2]);

    position[0] = jointPos[0].position;
    position[1] = jointPos[1].position;
    position[2] = jointPos[2].position;

    // skip joints that are not tracked confidently enough
    if (jointPos[0].fConfidence < 0.3f || jointPos[1].fConfidence < 0.3f || jointPos[2].fConfidence < 0.3f) {
        continue;
    }

    depth_generatorMain.ConvertRealWorldToProjective(3, position, position);

    position[0].X = position[0].X * ofGetWidth() / 640.0f;
    position[0].Y = position[0].Y * ofGetHeight() / 480.0f;

    position[1].X = (position[1].X + position[2].X) * ofGetWidth() / (2.0f * 640.0f);
    position[1].Y = (position[1].Y + position[2].Y) * ofGetHeight() / (2.0f * 480.0f);

    XnPoint3D midPosition;
    midPosition.X = (position[0].X + position[1].X) / 2;
    midPosition.Y = (position[0].Y + position[1].Y) / 2;

    float angle = M_PI / 2 + atan((position[1].Y - midPosition.Y) / (position[1].X - midPosition.X));
    if (position[1].X > position[0].X) angle -= M_PI;
    printf("%f %d %f %f %d %f %f\n", ofRadToDeg(angle), joints[0], position[0].X, position[0].Y, joints[1], position[1].X, position[1].Y);

    float h = scale * fabs(ofDist(position[0].X, position[0].Y, position[1].X, position[1].Y));
    float w = h * (float)img.width / (float)img.height;

    ofTranslate(midPosition.X, midPosition.Y);
    ofRotate(ofRadToDeg(angle), 0, 0, 1);

    b2dPart.setup(world->getWorld(), 0, 0, w * 0.60, h * 0.60);
    b2dPart.draw();               // box2d rectangle object mapped to a body part
    img.draw(0, 0, layer, w, h);  // limb image png for a body part
}


Code - testApp.cpp - draw()
void testApp::draw(){

    for (int i = 0; i < b2dLogs.size(); i++) {
        b2dLogs[i].draw(); // falling log
    }

    if (isLive) {

        if (isTracking) {
            ofTranslate(skeletonPos.x, skeletonPos.y);
            ofScale(1, 1);
            //recordUser.draw(); // debug

            curCharacter->draw(&box2d); // drawing a character, passing in the current box2d instance
        }
    }
}

Any assistance or guidance is much appreciated. Thank you in advance!