Tracking someone

Hi all,
I am completely new to openFrameworks and I would like to learn how to track someone and put an image on top of them.
I only know the very basics of openFrameworks, so if someone could guide me through what I should do, I would appreciate it.
Thank you

Can you be a bit more specific? What do you want to track? The face? The body? Where do you want to place the image?

Hi, thanks for your reply.
I am trying to learn openFrameworks, the Kinect, Code::Blocks and Processing. What I want to do is learn to track movement and map the projection.

Please look at this video:

He puts a photo on top of the skeleton that is recognized by the Kinect, and I would like to know how to do that.

And this video changes the bodies of the participants; I think it follows the same principle:

I have ofxOpenNI but I don’t know how to use it.

I would appreciate any help in order to achieve that.

It’s very easy to do tracking (thanks to OF). I am not going to be very specific, mainly because I don’t think you are ready yet to fully grasp the coding part, but also because you can search the internet and the forum to find the answers once you know the general ideas…

Using the Kinect:

The Kinect has an infrared depth camera; it also has a normal RGB camera.

The OpenNI library has an example, both in Processing and in OF, that does half of what you want to do.

Skeleton tracking is the term you should search for. The catch is that the user must do a specific pose to activate the skeleton (like in the video).
You should/could limit the depth range of the Kinect for more accuracy. Because the Kinect is using infrared, you can do that: the closer something is to the camera, the brighter it appears to be. Without the infrared you would need a more controlled environment (e.g. a white wall behind the user).

Then you can draw images or 3d models using the position of the joints of the skeleton.

To do this you must first understand the basic concepts of Object Oriented Programming.

The line SimpleOpenNI.SKEL_HEAD, for example, is something you will find in the code. SimpleOpenNI is a class that handles the entire skeleton,

and the dot operator is what allows you to access values inside that object. SKEL_HEAD refers to the position of the head. Again, it’s an object by itself inside the skeleton object that carries three values, x, y, z, like PVector in Processing or ofPoint/ofVec2f in openFrameworks. So by using the dot operator you can access those values and then use them to draw your image / 3D model etc.
Much like SKEL_HEAD, you can access other points on the skeleton like this:
SimpleOpenNI.SKEL_LEFT_HIP.x, SimpleOpenNI.SKEL_TORSO.y etc.

Here are two useful links for you. It’s probably a good idea to learn Processing first before jumping into C++ and openFrameworks.

The guy in the second example who did the body swap is using all of the above, plus OpenGL to map the images using a vertex shape or something.
If he had used a 3D model he could have pasted the texture on the model and achieved better results, but his approach is very different and interesting.
At this point the best you can do is study the code in the examples, learn C++ or Processing, and you should be on your way in a few days…


Thank you very much for your reply. Very useful indeed at this point.
I am going to look at those links and the examples, and I will come back with more specific questions.

Hi everyone!! I’m also doing something similar with skeleton tracking and then mapping it with some PNGs… I haven’t got to that point yet, but I would like to… I’ve been programming C++ for 2 months and I already have some doubts…

First of all, thanks to kkkkkk for his answer because it has helped me a lot… but I would like to ask some questions about it. I’m playing around with the OpenNI addon from Roymcdonald or Gameover, and I don’t really know what to do: should I start a new project and “import SimpleOpenNI.*;”, or continue as I was doing, creating a project from the OpenNI examples with “#include OpenNI.h”? I would also like to know where I should put the “import SimpleOpenNI.*;”: should it go in “test.cpp” or in “XnOpenNI.cpp”?

Thanks a lot, and I look forward to some answers…

If you are a beginner you can duplicate all the examples in OF and play with them… break them, etc…

You can of course build your own projects from scratch once you decide what you want to do.

The import or include of SimpleOpenNI.h etc. goes in testApp.h, or wherever you want to use them.

Include or import commands just tell the compiler to include those classes in the code.
If you don’t know what you are doing, go easy on them, because you might create a redefinition problem
(declaring the same objects twice).

But in order for them to appear in your code and be able to include them (if you are not using the examples), you need to “link” them in the Xcode project (if you are using a Mac).
To do this you just drag the addon’s folder from /…/openframeworks/addons/ into your addons Xcode linker folder…

Or, if you are using openFrameworks 0073, you can use the projectGenerator to automatically create the project you want with all the addons you are planning to use…

Thanks kkkkkkk for answering so fast, very thankful btw…

I just want to ask you something more. Since I started I have done everything you said: I played with the examples, I broke them, I built my own beginner projects, etc. (Firstly, I should say that I’m using Xcode and ofx0073.) I started a new project with the projectGenerator, adding OpenNI and OpenCV, and I have included “#include ofxOpenNI.h” and “#include ofxOpenCV.h” in the test.h… but I still can’t understand where I should put this code for skeleton tracking. I think it should go in testApp.cpp, because, as you told me before, in testApp.h I’m only going to put the functions I’m going to develop later in testApp.cpp, right?

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  // instantiate a new context
  context = new SimpleOpenNI(this);
  // enable depthMap generation
  context.enableDepth();
  // enable skeleton generation for all joints
  context.enableUser(SimpleOpenNI.SKEL_PROFILE_ALL);
  // create a window the size of the depth information
  size(context.depthWidth(), context.depthHeight());
}

void draw() {
  // update the camera
  context.update();
  // draw depth image
  image(context.depthImage(), 0, 0);
  // for all users from 1 to 10
  for (int i = 1; i <= 10; i++) {
    // check if the skeleton is being tracked
    if (context.isTrackingSkeleton(i)) {
      drawSkeleton(i);  // draw the skeleton
    }
  }
}

// draw the skeleton with the selected joints
void drawSkeleton(int userId) {
  context.drawLimb(userId, SimpleOpenNI.SKEL_HEAD, SimpleOpenNI.SKEL_NECK);
  context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_LEFT_SHOULDER);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_LEFT_ELBOW);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_ELBOW, SimpleOpenNI.SKEL_LEFT_HAND);
  context.drawLimb(userId, SimpleOpenNI.SKEL_NECK, SimpleOpenNI.SKEL_RIGHT_SHOULDER);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_RIGHT_ELBOW);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_ELBOW, SimpleOpenNI.SKEL_RIGHT_HAND);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_SHOULDER, SimpleOpenNI.SKEL_TORSO);
  context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_LEFT_HIP);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_HIP, SimpleOpenNI.SKEL_LEFT_KNEE);
  context.drawLimb(userId, SimpleOpenNI.SKEL_LEFT_KNEE, SimpleOpenNI.SKEL_LEFT_FOOT);
  context.drawLimb(userId, SimpleOpenNI.SKEL_TORSO, SimpleOpenNI.SKEL_RIGHT_HIP);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_HIP, SimpleOpenNI.SKEL_RIGHT_KNEE);
  context.drawLimb(userId, SimpleOpenNI.SKEL_RIGHT_KNEE, SimpleOpenNI.SKEL_RIGHT_FOOT);
}

// Event-based Methods

// when a person ('user') enters the field of view
void onNewUser(int userId) {
  println("New User Detected - userId: " + userId);
  // start pose detection
  context.startPoseDetection("Psi", userId);
}

// when a person ('user') leaves the field of view
void onLostUser(int userId) {
  println("User Lost - userId: " + userId);
}

// when a user begins a pose
void onStartPose(String pose, int userId) {
  println("Start of Pose Detected - userId: " + userId + ", pose: " + pose);
  // stop pose detection
  context.stopPoseDetection(userId);
  // start attempting to calibrate the skeleton
  context.requestCalibrationSkeleton(userId, true);
}

// when calibration begins
void onStartCalibration(int userId) {
  println("Beginning Calibration - userId: " + userId);
}

// when calibration ends - successfully or unsuccessfully
void onEndCalibration(int userId, boolean successfull) {
  println("Calibration of userId: " + userId + ", successfull: " + successfull);
  if (successfull) {
    println("  User calibrated !!!");
    // begin skeleton tracking
    context.startTrackingSkeleton(userId);
  } else {
    println("  Failed to calibrate user !!!");
    // start pose detection again
    context.startPoseDetection("Psi", userId);
  }
}

Thanks again for spending your time answering :slight_smile:

You can include it in the .cpp as well; it depends on what you want to do… but consider it a beginner’s rule that you import/include things in .h files, and include only the object’s own .h (like testApp.h) inside your .cpp files…

That way you have included all the files you have declared inside the .h, and you can avoid redefinition problems… etc.

ps: import is from Java, and #import from Objective-C… in C/C++ we like to use #include

Where did the code you pasted come from?

From what I see, this will not compile.

Also, it appears that you want to use standalone functions… do you really want to do that?


case 1

If you want to use the SimpleOpenNI object inside testApp, and you don’t want standalone functions, declare it inside testApp.h.

I see that you are using the new operator. This means that you are dynamically allocating it, and for that you should declare context as a pointer.

Do it inside the public: section of the testApp class (above public: means it’s private; you can also declare it explicitly as private).

This goes inside testApp:

SimpleOpenNI  *context;    

The above code means we declare a pointer to a SimpleOpenNI object that we are later going to turn into a real object.
Right now it’s just a pointer pointing to nothing.

Put import SimpleOpenNI.*; in testApp.h (because we have declared context inside the testApp declaration, which lives in testApp.h).

And transfer what is inside your setup/draw functions into testApp::setup()/draw() etc. in testApp.cpp…

Make every context. become context->

For your other functions that don’t already exist inside the testApp class, like void drawSkeleton(int userId),

declare them inside the testApp class in the .h, like we did with the pointer context;

and then inside testApp.cpp

write them as

void testApp::drawSkeleton(int userId){  

and put the code inside them…

If I understood correctly what you are trying to do, you should then have compiling code.

case 2

If the code you are using is part of an addon, written in separate .h/.cpp files (I doubt it),
but if this thing is an addon or something that is supposed to compile,
then include the name of the .h file inside testApp and use the standalone functions in your code.

case 3
Alternatively, assuming that you wrote this thing inside testApp, and assuming that you want to use standalone functions and not functions inside testApp:
again, do the thing I told you about with . and -> and *,
and declare those functions beforehand in the .h, or just transfer them to the .h.

(In C++ you can either declare and implement functions in separate .h/.cpp files, or you can do it all in a single file.)

I recommend reading a book on OO programming in C++!! It’s not very difficult to wrap your head around things…

beginners tip:

If you are using code that you don’t really understand, make sure you take small steps at all times… making sure your project compiles… then slowly add things… detect syntax errors one by one… then google them… understand why things must be one way instead of another… and then proceed… don’t copy huge parts of code from Java and then try to fix it all at once…

small steps

Also know that in C++ there are MANY MANY MANY MANY ways to do things… it depends on what you want to do…


Hi guys, I’ve got the skeleton tracking working, but now I’m trying to implement it so the skeleton can interact with objects within a scene. Any advice on how to do this?
Code below:

void testApp::setup() {
    pMouseX = 0;
    pMouseY = 0;
    qMouseX = 0;
    qMouseY = 0;
    rMouseX = 0;
    rMouseY = 0;
    sMouseX = 0;
    sMouseY = 0;

    // some model / light stuff
    glEnable(GL_DEPTH_TEST);
    glShadeModel(GL_SMOOTH);

    /* initialize lighting */
    glLightfv(GL_LIGHT0, GL_POSITION, lightOnePosition);
    glLightfv(GL_LIGHT0, GL_DIFFUSE, lightOneColor);
    glEnable(GL_LIGHT0);
    glLightfv(GL_LIGHT1, GL_POSITION, lightTwoPosition);
    glLightfv(GL_LIGHT1, GL_DIFFUSE, lightTwoColor);
    glEnable(GL_LIGHT1);
    glEnable(GL_LIGHTING);
    glColorMaterial(GL_FRONT_AND_BACK, GL_DIFFUSE);
    glEnable(GL_COLOR_MATERIAL);

    // load the models - the 3ds and the texture file need to be in the same folder
    tableModel.loadModel("actualtable.3ds", 20);
    teaModel.loadModel("cupatea.3ds", 20);

    // you can create as many rotations as you want,
    // choose which axis you want each one to affect,
    // and you can update these rotations later on
    tableModel.setScale(0.3, 0.2, 0.2);
    teaModel.setPosition(ofGetWidth()/2, ofGetHeight()/2, 0);
    teapotModel.setPosition(ofGetWidth()/2, ofGetHeight()/2, 0);
    tableModel.setPosition(ofGetWidth()/2, ofGetHeight()/2, 0);
    plateModel.setPosition(ofGetWidth()/2, ofGetHeight()/2, 0);
    spoonModel.setPosition(ofGetWidth()/2, ofGetHeight()/2, 0);

    // load images once here, not every frame in draw()
    // (myImage is assumed to be an ofImage member declared in testApp.h)
    myImage.loadImage("commercial-kitchen.jpg");

    numDevices = openNIDevices[0].getNumDevices();
    for (int deviceID = 0; deviceID < numDevices; deviceID++){
        //openNIDevices[deviceID].setLogLevel(OF_LOG_VERBOSE); // ofxOpenNI defaults to ofLogLevel, but you can force to any level
        openNIDevices[deviceID].setup();
        openNIDevices[deviceID].addDepthGenerator();
        openNIDevices[deviceID].addImageGenerator();
        openNIDevices[deviceID].addUserGenerator();
        openNIDevices[deviceID].start();
    }

    // NB: Only one device can have a user generator at a time - this is a known bug in NITE due to a singleton issue
    // so it's safe to assume that the first device to ask (ie., deviceID == 0) will have the user generator...
    openNIDevices[0].setMaxNumUsers(1); // default is 4
    ofAddListener(openNIDevices[0].userEvent, this, &testApp::userEvent);

    ofxOpenNIUser user;
    user.setPointCloudDrawSize(2); // this is the size of the glPoint that will be drawn for the point cloud
    user.setPointCloudResolution(2); // this is the step size between points for the cloud -> eg., this sets it to every second point
    openNIDevices[0].setBaseUserClass(user); // this becomes the base class on which tracked users are created
                                             // allows you to set all tracked user properties to the same type easily
                                             // and allows you to create your own user class that inherits from ofxOpenNIUser

    // if you want to get fine grain control over each possible tracked user for some reason you can iterate
    // through users like I'm doing below. Please note the use of nID = 1 AND nID <= openNIDevices[0].getMaxNumUsers(),
    // as what you're doing here is retrieving a user that is being stored in a std::map using its XnUserID as the key -
    // that means it's not a 0-based vector, but instead starts at 1 and goes up to, and includes, maxNumUsers...
//    for (XnUserID nID = 1; nID <= openNIDevices[0].getMaxNumUsers(); nID++){
//        ofxOpenNIUser & user = openNIDevices[0].getUser(nID);
//        user.setUseMaskTexture(true);
//        user.setUsePointCloud(true);
//        //user.setUseAutoCalibration(false); // defaults to true; set to false to force pose detection
//        //user.setLimbDetectionConfidence(0.9f); // defaults 0.3f
//        user.setPointCloudDrawSize(2);
//        user.setPointCloudResolution(1);
//    }
}

void testApp::update(){
    ofBackground(0, 0, 0);
    pMouseX = mouseX;
    pMouseY = mouseY;
    qMouseX = mouseX;
    qMouseY = mouseY;
    rMouseX = mouseX;
    rMouseY = mouseY;
    sMouseX = mouseX;
    sMouseY = mouseY;
    for (int deviceID = 0; deviceID < numDevices; deviceID++){
        openNIDevices[deviceID].update();
    }
}

void testApp::draw(){
    ofSetColor(255, 255, 255);
    for (int deviceID = 0; deviceID < numDevices; deviceID++){
        // debug draw does the equivalent of the methods below
        openNIDevices[deviceID].drawDepth(0, 0, 320, 260);
        openNIDevices[deviceID].drawImage(1074, 0, 320, 260);
        openNIDevices[deviceID].drawSkeletons(0, 0, 320, 260);
    }

    // do some drawing of user clouds and masks
    int numUsers = openNIDevices[0].getNumTrackedUsers();
    for (int nID = 0; nID < numUsers; nID++){
        ofxOpenNIUser & user = openNIDevices[0].getTrackedUser(nID);
        ofPushMatrix();
        ofTranslate(320, 240, -1000);
        //tumble according to mouse
        teaModel.setRotation(0, 90, 1, 0, 0);
        teaModel.setRotation(1, 270, 0, 0, 1);
        teapotModel.setRotation(0, 90, 1, 0, 0);
        teapotModel.setRotation(1, 270, 0, 0, 1);
        plateModel.setRotation(0, 90, 1, 0, 0);
        plateModel.setRotation(1, 270, 0, 0, 1);
        spoonModel.setRotation(0, 90, 1, 0, 0);
        spoonModel.setRotation(1, 270, 0, 0, 1);
        ofPopMatrix();
    }
}

Could I use these to control the objects if I put .x and .y in place of mouseX and mouseY?

ofxOpenNIJoint rightHand = user.getJoint(JOINT_RIGHT_HAND);  
ofPoint rightHandPoint = rightHand.getProjectivePosition();  
ofVec2f newRightHandPosition = ofVec2f(rightHandPoint.x, rightHandPoint.y);  

Guys, sorry to interrupt, but I have a small query on this code. When I try to run it, everything is OK except
here: "context.startPoseDetection("Psi", userId);". My IDE flags this as an error: "The function startPoseDetection(String, int) does not exist". Can you fix this error??