ofxOpenNI Development

@gameover are you still reading this thread?

anyway I have another bug:
updateGenerators should start with

if(bIsShuttingDown || bPaused || !bIsContextReady){  
    if(bIsThreaded && bUseSafeThreading) lock();  

The booleans are atomic, so no locking is needed for them. Moreover, I had a crash with the original code when bPaused was true, because the code tried to unlock the mutex even though no lock had been taken when safe threading was set to false.
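To illustrate the matched lock/unlock guard being suggested, here is a minimal, framework-free sketch. The flag names are taken from the thread, but the class itself and its update body are hypothetical stand-ins, not ofxOpenNI's actual implementation:

```cpp
#include <atomic>
#include <mutex>

// Hypothetical stand-in showing the matched lock/unlock pattern:
// the mutex is locked only when safe threading is actually enabled,
// and the unlock mirrors the exact same condition, so we never
// unlock a mutex that was never locked.
struct GeneratorUpdater {
    std::atomic<bool> bIsShuttingDown{false};
    std::atomic<bool> bPaused{false};
    std::atomic<bool> bIsContextReady{true};
    std::atomic<bool> bIsThreaded{true};
    std::atomic<bool> bUseSafeThreading{false};
    std::mutex mutex;
    int updates = 0;

    void updateGenerators() {
        if (bIsShuttingDown || bPaused || !bIsContextReady) {
            bool locked = false;
            if (bIsThreaded && bUseSafeThreading) { mutex.lock(); locked = true; }
            // ... early-out housekeeping would go here ...
            if (locked) mutex.unlock(); // unlock only if we actually locked
            return;
        }
        updates++; // normal update path
    }
};
```

The key point is that the unlock condition is derived from whether the lock was taken, rather than being checked independently, so the two can never disagree.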

@christophpacher: my apologies for not being more active on ofxOpenNI over the last month - I have been relocating to Linz and have been quite busy with 0071 bugfixes/features. Are you using github? I think it would be really great if you could make a fork of ofxOpenNI and then contribute these issues and/or bugfixes via git - that way your code stays up to date and I can pull or modify changes as you go. It also makes it easier to keep all feature/bugfix requests on Github…


* Booleans for bPaused etc -> makes sense…just made that change but haven’t tested it yet
* RawDepthPixels should be OF_GRAYSCALE_IMAGE -> made that change a week or so ago
* bUseBackBuffer -> no, this is the correct way around…in general if you’re using a backBuffer you draw/change pixels in the backBuffer and then, during an update, swap the backBuffer into currentBuffer and draw that -> but when you turn off the buffering you just want to draw/change pixels in backBuffer and then update/draw that buffer directly -> I suppose I could change the if/swap logic and updateDepthPixels logic around so that currentPixels is always the one that gets drawn, but effectively it is the same…
* re changes to openNI include folders - yes I made some changes to the include files -> these are only to allow compilation on mingw32/Code::Blocks
* private vs virtual/protected - that’s a good idea! easiest for these kind of changes is to collaborate via git/github
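The bUseBackBuffer point above can be sketched, framework-free, like this. The buffer names mirror the ones used in the thread, but the class is a hypothetical stand-in, not ofxOpenNI's actual code: writes always go to the back buffer; with buffering on, update() swaps it into the current buffer for drawing; with buffering off, drawing reads the back buffer directly.

```cpp
#include <vector>
#include <utility>

// Hypothetical sketch of the backBuffer/currentBuffer logic described above.
struct DoubleBuffered {
    std::vector<unsigned char> backBuffer;
    std::vector<unsigned char> currentBuffer;
    bool bUseBackBuffer = true;

    // Pixels are always written into the back buffer.
    void writePixels(const std::vector<unsigned char>& pixels) {
        backBuffer = pixels;
    }

    // During update, the back buffer is swapped into the current buffer
    // (only when buffering is enabled).
    void update() {
        if (bUseBackBuffer) std::swap(backBuffer, currentBuffer);
    }

    // Drawing reads from whichever buffer is the "front" one.
    const std::vector<unsigned char>& drawSource() const {
        return bUseBackBuffer ? currentBuffer : backBuffer;
    }
};
```

This also shows why "which buffer gets drawn" is just a naming choice: with buffering disabled the back buffer simply becomes the front buffer.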

I’m wondering if you would be into meeting up sometime and spending an afternoon making changes to the Experimental branch before merging it with Master? You seem to be working a lot with ofxOpenNI and I could do with your feedback before making the final merge. I think it’s time to do this, and since I am now living ‘down the road’ so to speak, it could make the process much faster…I can come to Vienna, or if you’d like to visit Linz/Ars then that could also be arranged…? PM me and we can hopefully find a date/time that works…

@kaykay: yes one day I will find the time to document ofxOpenNI but I was most concerned to finalize the API of the Experimental branch before beginning the process of documentation

@cgiles: could you let me know what platform? what version of OS? which branch? which IDE? The theo folder thing doesn’t mean anything…it looks like your problem could be related to this: https://github.com/gameoverhack/ofxOpenNI/issues/9

@irregular: yes it’s possible -> have a look at http://forum.openframeworks.cc/t/defining-a-hotspot-with-ofxopenni/8590/30

@kamend: ofxOpenNI experimental branch uses auto calibration - I have not yet implemented save/load calibration, but it’s on the todo list!

@everyone: ok…I probably missed some things - please let me know if you’re still stuck, or I didn’t address it in this ‘bulk’ response - I’m more able to be online now!!

@cgiles: you might also want to look at msg40929

@gameover yeah, I have avoided forking stuff on git until now, since I do not really like git. Perhaps I will get to it to make this stuff easier. If you could write me a short guide on how to fork and push changes and save me some time looking through the dreadful documentation of git, I would be grateful.
I would also be up for meeting sometime. Since it’s been a long time since I visited the ARS, perhaps I’ll even stop by in Linz.

I just pulled the latest OF changes and your code does not compile anymore.
It comes down to the default parameters of e.g.

addHandFocusGesture(string niteGestureName, ofPoint LeftBottomNear = NULL, ofPoint RightTopFar = NULL);  

ofPoint cannot be converted to int anymore.
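The error reported above comes from using NULL as a default for a class-type parameter: that only compiled while an implicit number-to-point conversion existed. One possible fix, sketched here with a minimal stand-in struct (Point and the function body are hypothetical illustrations, not the actual ofxOpenNI signature's implementation), is to default to a value-initialized point instead:

```cpp
#include <string>

// Minimal stand-in for ofPoint, just to show the default-argument pattern.
struct Point {
    float x = 0, y = 0, z = 0;
};

// Instead of `ofPoint LeftBottomNear = NULL` (which relied on an implicit
// number-to-point conversion that no longer exists), default to a
// value-initialized point:
bool addHandFocusGesture(const std::string& niteGestureName,
                         Point leftBottomNear = Point(),
                         Point rightTopFar = Point()) {
    // A degenerate (equal-corner) focus box could then mean "use the whole frame".
    bool wholeFrame = (leftBottomNear.x == rightTopFar.x &&
                       leftBottomNear.y == rightTopFar.y &&
                       leftBottomNear.z == rightTopFar.z);
    return wholeFrame;
}
```

Callers that previously passed nothing keep working, and the sentinel check becomes an explicit comparison rather than a comparison against NULL.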

So that is what it’s called. Thank you very much!

@christophpacher: I resisted git for a while, but it is really handy for collaborating with other oF users/developers! Even if you just make issues on my github repo that would help me to track the problems and document changes etc…I wrote up some stuff on collaborating with git earlier in this same thread - hopefully they will get you up and running.

Let me know if there is a good time for us to work together - a date that is convenient to visit Linz - and/or I’ll let you know when next I’m coming to Vienna!

[quote=“gameover, post:101, topic:7403”]

So on that note we first need to get you up and running with git…I’m afraid I do not use the macosx client. I only use the command line - at first I found it a bit intimidating but it’s really fairly straightforward for simple things…and the process of having to type the commands does sort of make sure you think about what each one is doing.

There is a good git tutorial on forking here: http://help.github.com/fork-a-repo/

It’s a bit off topic, but to summarise the most useful git commands for collaboration (this assumes you’ve gone to my github repo and clicked on ‘fork’ already):

>     cd ~/[pathtoyourof007]/addons  
>     git clone git@github.com:username/ofxOpenNI.git  
>     cd ofxOpenNI  
>     git remote add upstream git://github.com/gameoverhack/ofxOpenNI.git  

That last command tells your local copy of your fork of the repo where to pull changes from its upstream parent (i.e., my repo).

And then when you want to get my most recent changes you do a:

>     git fetch upstream  

Which will fetch all changes to all branches, or if you want to be more specific you can do a:

>     git fetch upstream experimental  

It’s best to make changes on another branch rather than in the branch of an upstream repository. In other words don’t go making changes in the master, develop or experimental branches that get downloaded to your local repository, as then you may easily have to deal with a lot of conflicts every time I update my repo. Instead:

Firstly change to the experimental branch:

>     git checkout experimental  

then make a copy of this branch:

>     git checkout -b experimental-yourNameOrFeatureOrBugFix  

The ‘checkout -b’ command makes and checks out a branch all at the same time.

Now go ahead and make your changes.

Then run:

>     git status  

to get a list of files you’ve changed and (if you added any files) any untracked files in your local branch.

If you added files you can do

>     git add .  

to add all files, or

>     git add fileName.ext  

to selectively add files.

When you’re happy with a change you do a:

>     git commit -a -m "Some description of what you changed"  

Then you can do a:

>     git push origin experimental-yourNameOrFeatureOrBugFix  

This will create and upload this branch to YOUR github repo.

Then you go to your github repo and click on pull-request. Make sure you change the branch you are making the pull request on to the correct branch. It will default to something like:

>     You're asking gameoverhack to pull 42 commits into master from experimental-yourNameOrFeatureOrBugFix  

Click on the bolded ‘master’ bit and type in ‘experimental’ on the left hand side to change to the right branch to make the pull request on.

Which should update the pull request to:

>     You're asking gameoverhack to pull 42 commits into experimental from experimental-yourNameOrFeatureOrBugFix  

If you haven’t made a pull request or finished your changes, but my experimental branch has been updated in the meantime, you do a:

>     git fetch upstream  
>     git checkout experimental-yourNameOrFeatureOrBugFix  
>     git merge upstream/experimental  

Basically these commands download any changes from my repo, switch you to your branch, and then try to merge my latest changes into your branch. Hopefully you get a bunch of info about the changes that have been made, ending with:

>     Automatic merge succeeded  

If/when you get to needing to merge changes take a look at: http://book.git-scm.com/3-basic-branching-and-merging.html

Ok I think that’s enough for now…there are lots of tutorials out there and there are some slightly more ‘safe’ ways to do the branching so that you always have a nicely merged version of the code…but let’s see how you go with this first.

But essentially the idea is to avoid making changes on a local branch that is the same as an upstream branch name, and to make pull requests on either develop or experimental (depending on where you started making changes)…

I’m sure others can correct or elaborate on my git-lore :wink:

Ok, I will have a look at it when it shows up.

What is your experience with ofxOpenNI and OF framerates and disabling vertical sync? I just tested your original sample project and experienced the same issues as I described here:

I just do not seem to get control over vertical sync and the framerate in my app. The ofxOpenCv example works fine, so it seems to be a threading problem or an OpenNI problem. Any ideas?

[quote=“christophpacher, post:164, topic:7403”]
@gameover yeah i avoided forking stuff on git until now, since i do not really like git. perhaps i will get to it to make this stuff easier. if you could write me a short guide how to fork and push changes and save me some time looking into the dreadful documentation of git, i would be greatful.[/quote]
git’s documentation is not that dreadful ;-): e.g. http://git-scm.com/
There’s a pretty encompassing write-up on the OF github wiki, check it out, pretty useful for workflow/PR questions, too: https://github.com/openframeworks/openFrameworks/wiki/openFrameworks-git-workflow
It also has many links to further information at the end.

I don’t know if I can ask you this, but I’ve been looking around trying to figure out how to post my questions on the
openframeworks forum (http://forum.openframeworks.cc/) without any luck.
I’m trying the ofxOpenNI experimental branch, and I can’t get the depth (or mask) pixels anymore. Can you help me?
When I try to copy data to an ofxCvColorImage or ofxCvGrayscaleImage I get strange results…

I would like to do things like:
cvImage.setFromPixels(image.getPixels(), 640, 480); // cvImage is an ofxCvColorImage, image is an ofxImageGenerator  
Mat frame = toCv(cvImage); // then convert to an OpenCV Mat  
cvtColor(frame, imageMat, CV_RGB2GRAY);  

and this:

cvDepth.setFromPixels(depth.getDepthPixels(nt, ft), depth.getWidth(), depth.getHeight()); // cvDepth is an ofxCvGrayscaleImage, depth is an ofxDepthGenerator  
depthMat = toCv(cvDepth); // depthMat is an OpenCV Mat  

I hope you can help me, or if not, can you tell me how I can post this question on the forum?
Best regards,

Paulo Trigueiros

@christophpacher: fixed experimental branch to work with v071 openFrameworks - thanks for pointing out the problem!

@paulo.trigueiros: thanks for taking the time to work out using the forum - it’s good to ask these questions publicly, as usually other people will also want to know the answer!

I made some updates to the src-UserAndCloud-Simple example that show exactly how to get hold of the user mask pixel and texture references. Please let me know if you can/not get the example to work…you will see that you can grab the mask pixels directly for each user or just use the textures directly (the internal method used to generate the mask textures occurs in the most optimal way - faster than creating textures by using the pixels in main application update or draw cycles).

I also added a simple hand tracking example.

I hope these examples help.

I want to be able to use the depth pixels as a mask to extract the hand region from the image.
In the previous version we used something like this:

cvDepth.setFromPixels(depth.getDepthPixels(nt, ft), depth.getWidth(), depth.getHeight());  
depthMat = toCv(cvDepth);  
roiArea = cv::Rect(handBBox[i].x, handBBox[i].y, handBBox[i].width, handBBox[i].height);  
roi = depthMat(roiArea);  

where cvDepth is an ofxCvGrayscaleImage, and depthMat, roi and roiDepthTemp are OpenCV Mats.

I used all depth pixels and extracted a ROI (area around the tracked hand), and then used that
for the hand mask with the grayscale image.

How can I do that with the new version? I have tried so many ways without any result!



Ok for non-user depthMaskPixels and depthMaskTextures (and pointClouds!) I’ve implemented a similar API as for users, hands, gestures etc. That is, you can have multiples of everything. You can also define a ROI for retrieving the depthMaskPixels/Textures.

There is also a dedicated ROI class for making ‘hot spots’ for triggering functions based on skeleton or depth point positions.

I haven’t got around to documenting or making examples for the depthThreshold and ROI parts of the API…but will try to add some in the next few days.

In the meantime have you looked at these functions?

    // depth masks, pixels and point clouds (non-user)  
    void addDepthThreshold(ofxOpenNIDepthThreshold & depthThreshold);  
    void addDepthThreshold(int _nearThreshold,  
                           int _farThreshold,  
                           bool _bUseCloudPoint = false,  
                           bool _bUseMaskPixels = true,  
                           bool _bUseMaskTexture = true,  
                           bool _bUseDepthPixels = false,  
                           bool _bUseDepthTexture = false);  
    int getNumDepthThresholds();  
    ofxOpenNIDepthThreshold & getDepthThreshold(int index);  

To get, let’s say, all the depthPixels between z = 200 and z = 2000 you can:

ofxOpenNIDepthThreshold depthThreshold = ofxOpenNIDepthThreshold(200, 2000); // there are a lot more things you can do - look at the class def for ofxOpenNIDepthThreshold  
openNIDevice.addDepthThreshold(depthThreshold);  

or more simply you can just say:

openNIDevice.addDepthThreshold(200, 2000);  
Once added, you can iterate, retrieve, modify, getMaskPixels, getMaskTexture, getPointCloud etc for every depthThreshold:

for(int i = 0; i < openNIDevice.getNumDepthThresholds(); i++){  
    ofxOpenNIDepthThreshold & depthThreshold = openNIDevice.getDepthThreshold(i);  
    // do something with depthThreshold.getMaskPixels() or getMaskTexture() etc etc  
}  

I didn’t yet make it possible to delete a depthThreshold (oversight! will do it asap).

Since each ofxOpenNIDepthThreshold is updated internally on the thread and not during the application update (as this is the most efficient way to do it), you will need to be careful about the order of adding/modifying depthThresholds if you want to retrieve only the pixels/mask/texture around the hand. I have a sneaking suspicion you might always be one frame behind the hand’s position if you modify the ROI of each threshold while iterating the hands…hmmmm…I’ll need to try it out to really know…

I guess I could add asking for depthPixel ROIs ‘on demand’ as you are doing in your code, but it’s quite inefficient - which is why I left it out/changed it - as it requires traversing the whole depth mask for each region every time you ask for the depthPixels. Instead, you set up ROIs up front and then, in one traversal, check each pixel against all of the ROIs you are looking for…that way it can all be done at the same time as producing the colour histogram for the depth image, etc.
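The single-traversal idea can be sketched like this (a plain C++ stand-in, not the actual ofxOpenNI internals; all names here are hypothetical): each pixel of the depth map is visited exactly once and tested against every registered region, so adding more regions does not add more full traversals.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch of one-pass, many-ROI depth masking.
struct DepthRegion {
    uint16_t nearZ, farZ;       // depth band (e.g. millimetres)
    int x, y, w, h;             // rectangular ROI in the depth image
    std::vector<uint8_t> mask;  // filled during the single traversal
};

void updateMasks(const std::vector<uint16_t>& depth, int imgW, int imgH,
                 std::vector<DepthRegion>& regions) {
    for (auto& r : regions) r.mask.assign(r.w * r.h, 0);
    // One pass over the depth map, testing each pixel against every region.
    for (int y = 0; y < imgH; y++) {
        for (int x = 0; x < imgW; x++) {
            uint16_t z = depth[y * imgW + x];
            for (auto& r : regions) {
                if (x >= r.x && x < r.x + r.w && y >= r.y && y < r.y + r.h &&
                    z >= r.nearZ && z <= r.farZ) {
                    r.mask[(y - r.y) * r.w + (x - r.x)] = 255;
                }
            }
        }
    }
}
```

With R regions the cost is one traversal with R cheap tests per pixel, rather than R separate traversals, which is the trade-off the paragraph above is describing.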

You could alternatively use getDepthRawPixels in much the same way as you were used to using getDepthPixels in the old API - but right now you would have to iterate the pixels yourself to set the ROI - basically that’s what the old getDepthThreshold(near, far) used to do.
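Iterating the raw pixels yourself, as suggested above, might look roughly like this (a framework-free sketch with a hypothetical helper, not ofxOpenNI code): walk a rectangular window of the row-major raw depth buffer and keep only values inside the near/far band.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical helper: threshold a rectangular ROI of a row-major raw depth
// buffer, producing an 8-bit mask (255 inside [nearZ, farZ], else 0).
// This is roughly what the old getDepthThreshold(near, far) did, here
// restricted to a window so only the region of interest is traversed.
std::vector<uint8_t> thresholdROI(const std::vector<uint16_t>& depth, int imgW,
                                  int x, int y, int w, int h,
                                  uint16_t nearZ, uint16_t farZ) {
    std::vector<uint8_t> mask(w * h, 0);
    for (int row = 0; row < h; row++) {
        for (int col = 0; col < w; col++) {
            uint16_t z = depth[(y + row) * imgW + (x + col)];
            if (z >= nearZ && z <= farZ) mask[row * w + col] = 255;
        }
    }
    return mask;
}
```

The cost here is one traversal per requested region, which is exactly why the internal one-pass approach scales better when you have several regions per frame.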

It’s quite possible I might be misunderstanding what you want/need to achieve. It might help if you let me know what you need the application to do (rather than what code you used to use) - then I can suggest the best use of the API…or modify it if necessary!


Thanks for your precious help…
Yes, I have been looking at ofxOpenNIDepthThreshold, but I couldn’t make it work, or extract the depth points between two planes (near, far). I will have to take another look, and I would appreciate your help…even with the ROI thing (how does it work?!)
I want to define two planes in the depth image, and use that image to extract a ROI where the hand (position) is…
I used to do that as you say, with getDepthThreshold(near, far), and then with the obtained image extract the ROI with OpenCV.

I think the ofxOpenNI code is much cleaner now, and much better. Congratulations on your fantastic work.

I will give you some feedback as soon as possible.

No problem. Have been meaning to make some examples using the depth threshold and ROI functions…will try to do it as soon as possible.

Try just adding one depth threshold in your setup:

openNIDevice.addDepthThreshold(1000, 1200); // or whatever near/far thresholds you need  
And then in draw or update try:

ofPixels & pix = openNIDevice.getDepthThreshold(0).getMaskPixels();  

You can modify the near/far distance with:

openNIDevice.getDepthThreshold(0).setNearThreshold(1000); // or whatever near threshold  
openNIDevice.getDepthThreshold(0).setFarThreshold(1200); // or whatever far threshold  

Obviously you could have n number of thresholds (like one for every hand) and try resetting the thresholds when you find a hand…

…but like I said I’m not sure if this will work exactly as expected because the actual mask pixels, texture and point cloud for each depth threshold are calculated and ‘cached’ in the worker thread (not when you ask for them with getDepthThreshold)…so you might end up getting a (slightly) incorrect depth region for that particular position…

Hand events might be a way to make this as fast as possible - ie., set the ROI’s or near/far thresholds when the event fires; collect the ROI/mask pixels during an update or draw…but again there might be latency - depends on how accurately you need it I guess…

What are you actually doing with these regions of depth pixels around the hand position?

What’s your end ‘product’ or goal?

I want to extract the hand from the gray image, and from the hand image extract features for hand gesture recognition.

I use both the binary image and the gray value image in my methods…

I need the hand regions as quickly as possible… :slight_smile:
In the older version I sometimes got flicker, and the depth image is not so good in terms of resolution :frowning:
Well, I’m going to try it the way you told me… and then I will send you some feedback

Thanks man,

Ok I pushed changes to the API and made a more advanced hand tracking example (src-HandTracking-Medium) that combines the hand tracker with depth thresholds. Pull the latest changes from github to try it out.

It seems to work pretty well…not sure if it’s a frame behind or not…but it seems ok…

I think you’ll find the depth image is always the same 640 x 480 resolution regardless - until they release Kinect HD :wink:

And as to flicker - well that happens too.

I’m working on some de-flicker code at the moment - but no promises…

The whole thing could be implemented a little more efficiently internally for finger tracking purposes. Maybe in the future…

It would be great if you’d post any code you develop for finger tracking…I’m pretty sure some people got it happening with a convex hull method over here http://forum.openframeworks.cc/t/hand-tracking/850/15

…and you’ll find plenty if you search the forums for ‘finger tracking’.


Today I will try your code and see what I can do…
I already have finger tracking code that I used in an application for robot hand control…
I can give you the code…
I have to leave for the moment, but I will be in touch as soon as possible.
Good work! :slight_smile:


Something strange is happening?!
Your software works great! That’s what I wanted.
But when I use the same piece of code in my software, it simply does not show the hand ROI?!
I don’t know the cause… maybe I’m just too tired and can’t see the error in front of my eyes :slight_smile:
I will keep trying. Meanwhile, it continues to give the error in the method xnWaitAnyUpdateAll().
Do you know the cause?