[ofxOpenNI + ofxHandGenerator] mask an xnContext

hello
I'm using the ofxOpenNI addon for hand recognition.
It works quite well… sometimes it does a good job, sometimes it does not recognize the hand, and sometimes it recognizes hands in the background.

Now, I have an ofxCvGrayscaleImage with pixels mapped between the near and far planes; with this I can easily select the depth range that contains the hands, so I get a good greyscale image containing only the hands.
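For reference, this is roughly how I build it (a sketch; depthGray, handMask, nearClip and farClip are my own names, not addon API):

unsigned char* src = depthGray.getPixels();
int w = depthGray.width;
int h = depthGray.height;
unsigned char* masked = new unsigned char[w * h];
for (int i = 0; i < w * h; i++) {
    // keep only the pixels that fall between the near and far planes
    masked[i] = (src[i] > nearClip && src[i] < farClip) ? src[i] : 0;
}
handMask.setFromPixels(masked, w, h);
delete[] masked;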

My idea is to apply this image as a mask to the XnContext so that hand detection runs only in this region.
Is this possible somehow?
Can you give me some advice?

Here is a screenshot of my greyscale image… it is almost perfect for hand recognition!

Hi,
I used the method that you describe, but the false positives were a real pain, especially in uncontrolled environments.
I solved it with a different method: I implemented automatic skeleton detection, with no pose, and took the hand coordinates from the skeleton, so there were almost no false positives. Then the mask was made by letting through only the pixels within a certain range of the hand position (see the sketch below).
My version of ofxOpenNI with autoskeleton can be found on my GitHub: https://github.com/roymacdonald/ofxOpenNI
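The masking idea, roughly (a sketch, not addon API; depthRaw, handZ and range are placeholder names):

#include <cmath>

// Keep only the depth pixels that lie within +-range millimetres of the
// hand's depth. depthRaw is the raw 16-bit depth map and handZ the hand
// joint's z coordinate in mm; both names are placeholders.
void buildHandMask(const unsigned short* depthRaw, int w, int h,
                   float handZ, float range, unsigned char* mask) {
    for (int i = 0; i < w * h; i++) {
        float d = (float)depthRaw[i];
        mask[i] = (d > 0 && fabs(d - handZ) < range) ? 255 : 0;
    }
}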

cheers!

OK, so I am not alone with the false positives! That makes me feel a little better… : )

Then the mask was made by letting through only the pixels within a certain range of the hand position.

Sorry, I did not understand this!

Me neither :stuck_out_tongue:
Forget about that. That was for getting a mask of the hand so you could isolate it, but it's not what you need.
Just use the hand position from the skeleton data. The addon won't give you the skeleton data out of the box, there's no method for that, but it's really easy to implement.
It should be something like:

for (int i = 0; i < yourUserGenerator.getNumberOfTrackedUsers(); i++) {
    // left hand x coordinate: replicate for the other hand and the other
    // coordinates, then do whatever you want with this data
    float x = yourUserGenerator.getTrackedUser(i)->left_lower_arm.position[1].X;
}

hope it helps.
best!

Sorry,
I'm using your code with your example, and it works.
But if I add

for (int i = 0; i < mUser.getNumberOfTrackedUsers(); i++) {
    cout << "user " << i << mUser.getTrackedUser(i)->left_lower_arm.position[0].X << endl;
    //cout << "user " << i << mUser.getTrackedUser(i)->left_lower_arm.position[1].X << endl;
}

inside your example, at line 50, after:

  
if (isMasking) {
    allUserMasks.setFromPixels(mUser.getUserPixels(), mUser.getWidth(), mUser.getHeight(), OF_IMAGE_GRAYSCALE);
    user1Mask.setFromPixels(mUser.getUserPixels(1), mUser.getWidth(), mUser.getHeight(), OF_IMAGE_GRAYSCALE);
    user2Mask.setFromPixels(mUser.getUserPixels(2), mUser.getWidth(), mUser.getHeight(), OF_IMAGE_GRAYSCALE);
  

… I get a segmentation fault.

Does the segmentation fault arise while there's a tracked user?
Check this:

  
  
for (int i = 0; i < mUser.getNumberOfTrackedUsers(); i++) {
    ofxTrackedUser* user = mUser.getTrackedUser(i);
    if (user != NULL && user->left_lower_arm.found) {
        cout << "user " << i << mUser.getTrackedUser(i)->left_lower_arm.position[0].X << endl;
        //cout << "user " << i << mUser.getTrackedUser(i)->left_lower_arm.position[1].X << endl;
    }
}
  

user is not NULL, but if I try to read the 'found' member via user->left_lower_arm.found, I get a segfault.
This is the output:

New User 1
/home/alberto/Software/of/apps/myApps/hands_and_skelton/bin/data/UserCalibration.bin
Auto Skeleton data loaded OK. User: 1
user is not null!
Segmentation fault

Your example works: if I stand in front of the Kinect my body is correctly masked…
and I have copied UserCalibration.bin from your example…

EDIT:
I found that the problem is inside ofxTrackedUser.cpp, here:

  
XnSkeletonJointPosition a, b;
user_generator.GetSkeletonCap().GetSkeletonJointPosition(id, rLimb.start_joint, a);
user_generator.GetSkeletonCap().GetSkeletonJointPosition(id, rLimb.end_joint, b);
// a joint whose confidence is below 0.3 was not reliably tracked
if (a.fConfidence < 0.3f || b.fConfidence < 0.3f) {
    cout << "dudee" << endl;
    rLimb.found = false;
    return;
}

Every time my Kinect recognizes a skeleton, a lot of "dudee" lines are printed to the console.

I tried to download and compile some code found on the net for generating UserCalibration.bin, but I always get this error:

userGen.Create : Can’t create any node of the requested type!
Segmentation fault

Hi,
strange, I've never had a segfault in this part.
I'll check it later this week.
Did you try adding breakpoints to isolate the section that throws the segfault?

BTW, in my implementation of ofxOpenNI there's a method within ofxUserGenerator that allows you to save UserCalibration.bin files. You will need to tweak some of its methods a bit for it to work, though:
mainly deactivate the autoskeleton feature and add a saveCalibration call once the user has posed and been calibrated (see the sketch below).
I'll include this in an update later this week.
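In raw OpenNI terms it boils down to something like this (a sketch; the addon wires up its callbacks a bit differently):

#include <XnCppWrapper.h>

// Sketch: once a user has been calibrated, save the calibration data so
// later sessions can load it and skip the psi pose entirely.
void XN_CALLBACK_TYPE onCalibrationEnd(xn::SkeletonCapability& skeleton,
                                       XnUserID user, XnBool success,
                                       void* cookie) {
    if (success) {
        skeleton.StartTracking(user);
        // dump the calibration for reuse in later sessions
        skeleton.SaveCalibrationDataToFile(user, "UserCalibration.bin");
    }
}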
Good luck!

OK, thanks : )
But are you also testing it under Linux?

No, I'm using OSX.
That might be the problem; maybe there's a bug in OpenNI on Linux.
Can you please provide me with the exact code you are using that throws the segfault?

best!

The Linux bug has to come from this trick we Linux users have to do to get this to compile. Would editing the XnStatus.h functions cause the != NULL exception?

http://forum.openframeworks.cc/t/ofxopenni-linux/5578/11

  
  
for (int i = 0; i < mUser.getNumberOfTrackedUsers(); i++)
{
    cout << "tracked user: " << i << endl;

    ofxTrackedUser* user = mUser.getTrackedUser(i);

    //if (user != NULL)
    //if (user != NULL && user->left_lower_arm.found)
    //if (user->left_lower_arm.found)
    {
        cout << "success!" << endl;
        cout << "user " << i << mUser.getTrackedUser(i)->left_lower_arm.position[0].X << endl;
        //cout << "user " << i << mUser.getTrackedUser(i)->left_lower_arm.position[1].X << endl;
    }
}
  

When using the above three if statements, the first two never return true, and the third one always segfaults.

Even if I stick the above code inside the keyPressed section and only execute it a few seconds after acquiring the user.

Am I going to have to program a series of nested get functions to get the position information out of this object?

There's a good chance that editing the XnStatus.h file without also editing XnStatus.cpp will cause problems.
Are you using the latest version of OpenNI? Stable or unstable?
You might want to try compiling the GitHub repo: https://github.com/OpenNI/OpenNI

A workaround without nested get() calls would be to modify ofxTrackedUser and make it publish the bone positions. Try using ofEvents and listeners to do so (see the sketch below).
It should be very straightforward.
Check this slideshow where ofEvents and listeners are explained:
http://www.slideshare.net/roxlu/openframeworks-007-events
You'll need to create a custom event class; it's explained towards the end of the slideshow.
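A minimal sketch of the idea (HandEventArgs and HandPublisher are made-up names, not part of the addon):

#include "ofMain.h"

// event payload carrying the hand positions of one tracked user
class HandEventArgs : public ofEventArgs {
public:
    int userId;
    ofPoint leftHand;   // screen-space left hand position
    ofPoint rightHand;  // screen-space right hand position
};

class HandPublisher {
public:
    ofEvent<HandEventArgs> handMoved;

    // call this from ofxTrackedUser once the limbs have been updated
    void publish(int userId, const ofPoint& l, const ofPoint& r) {
        HandEventArgs args;
        args.userId = userId;
        args.leftHand = l;
        args.rightHand = r;
        ofNotifyEvent(handMoved, args, this);
    }
};

// in your app:
//   ofAddListener(publisher.handMoved, this, &testApp::onHandMoved);
//   void testApp::onHandMoved(HandEventArgs& args) { ... }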

Anyway, I'd recommend you fix the "hack" you've done, because it can lead to many other problems later.

best!

You know what I need?

A Mac.

Figured it out!

The userGenerator wants a 1-indexed call; I was passing it the zero-index call and getting a NULL return. Actually reading getTrackedUser() made it clear. I love coding.
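For anyone else hitting this, the working loop (a sketch, assuming user indices run from 1 up to and including getNumberOfTrackedUsers()):

// getTrackedUser() is 1-indexed here: passing 0 returns NULL
for (int i = 1; i <= mUser.getNumberOfTrackedUsers(); i++) {
    ofxTrackedUser* user = mUser.getTrackedUser(i);
    if (user != NULL && user->left_lower_arm.found) {
        // position[1] is the end joint of the limb, i.e. the hand
        cout << "user " << i << " left hand x: "
             << user->left_lower_arm.position[1].X << endl;
    }
}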

Yeah, OK! With 1-indexed it works fine.
But… why is it 1-indexed?!?

Just because the first user is 1 and not 0.

Yeah, OK! Thanks!
Another thing: now I need the "world coordinates" (I think that's the right term): I need to know whether the user is also moving his hand along the z axis.
I've seen that the xn code first finds the point in this coordinate system and then transforms it into screen coordinates, right? I could just modify the class a bit and also save the points before the transformation (see the sketch below)…
Does that make sense? Have you ever thought of something similar?
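Something like this, inside ofxTrackedUser's limb update (a sketch; the member names are from my copy and may differ, and worldPosition is a field I would add myself):

XnPoint3D pos[2];
pos[0] = a.position;
pos[1] = b.position;
// keep the real-world (mm) joint positions before the projective conversion
rLimb.worldPosition[0] = pos[0];  // hypothetical XnPoint3D[2] member on the limb
rLimb.worldPosition[1] = pos[1];
depth_generator.ConvertRealWorldToProjective(2, pos, rLimb.position);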

Yes, it works that way, but getting the z axis of the hand can be much easier than that:
just read the pixel value of the depth image at the point where the hand is (see the sketch below).
I did this for an app I built some time ago and it worked perfectly.
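Roughly like this, with depthGray being the greyscale depth image you already have (the names are illustrative):

// sample the depth image at the hand joint's screen position
int x = (int)user->left_lower_arm.position[1].X;  // hand joint, screen space
int y = (int)user->left_lower_arm.position[1].Y;
x = (int)ofClamp(x, 0, depthGray.width - 1);      // stay inside the image
y = (int)ofClamp(y, 0, depthGray.height - 1);
unsigned char handZ = depthGray.getPixels()[y * depthGray.width + x];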

good luck!