I’ve started a simpler addon for OpenNI. Some things in the current one seemed too complex, and it doesn’t use the new structures in OF 007 like ofMesh for point clouds or ofPixels for pixel data. I’ve taken most of the code from the current one; I’ll add proper credit in readme files or headers or whatever people prefer.


The main changes:

  • Only one class to manage depth, RGB and IR (IR not implemented yet). I’ve tried to make the API more similar to ofxKinect

  • ofxOpenNIUser contains a vector of limbs instead of individual limbs by name, so it’s easier to process in loops…

  • ofxOpenNIUser also contains the point cloud and the mask for that user in case it’s activated in the tracker

  • The internal structure of the tracker maintains a map of tracked users by id instead of always keeping the max number of users in an array

  • All the structures are using OF classes like ofPoint instead of XnVector3D, ofPixels instead of unsigned char*…

  • Functions to convert back and forth between xn and OF structures (for now only ofPoint), similar to Kyle’s ofxCv toOf functions

  • There are fewer pointers; everything is passed and returned by reference when possible

I’ve just started to work with it, so it probably has some bugs, but so far live and recorded feeds, recording from the live feed, and tracking all seem to be working properly.

EDIT: It also runs in a thread, so the app’s fps isn’t affected by the capture.

Hey arturo,
If you need help dealing with OpenNI, just ask (I’ve invested a lot of time lately using it and getting to understand it).


I’d been making similar edits to the old ofxOpenNI (ofPixels, clearing bloat)

My current issue is that OpenNI.org has stopped hosting OpenNI binaries.
I noticed your libs folder has all the headers, but where are you getting your binaries from?
Including the binaries would be fantastic (although perhaps not legal for NITE)


EDIT: http://www.openni.org/Downloads/OpenNIModules.aspx
They were there after all. I presumed “OpenNI Modules” was going to be some modules that plug into OpenNI.

Also, I needed to run the install from source rather than just copying the binaries, or playback of captures won’t work. (For OpenNI, of course; for NITE there’s no source.)

It’s here: https://github.com/OpenNI/OpenNI
That link appears on the openni.org site; see the attached image.

Hi guys, so nice to see this new addon, that’s smart!
Apologies if you’re still working out how to make it easier for everyone, but I can’t wait to try it, and it’s not clear to me how to create the right Xcode project with the right libs set up or loaded…

dyld: Library not loaded: …/…/…/Bin/Release/libXnVCNITE_1_4_1.dylib

I’ve seen many approaches on the forum but I still haven’t been successful, and it’s not clear to me how to…

Looks great. I’m also having trouble getting an Xcode project working around this… anyone have one lying around? I’ll post if I figure it out.

Ok, got it working… I’ve attached the Xcode project in case it helps anyone else. Also wrote up some instructions. I’m starting from the absolute beginning, so apologies if they’re kind of verbose. I didn’t realize that I still had to install OpenNI despite already having the dylib binaries.

This is only tested with ofxOpenNI2 + OF 007 Git (master branch) + Xcode 4.2 + Mac OS 10.7.2.

Xcode project:


1. Install OF
I used the master branch from github.
[font=Courier]$ git clone https://github.com/openframeworks/openFrameworks.git[/font]

2. Install ofxOpenNI2
[font=Courier]$ git clone https://github.com/arturoc/ofxOpenNI2.git[/font]
Make sure it winds up in OF’s /addons folder.

3. Install OpenNI
I used the binaries: OpenNI Binaries -> Unstable -> Mac
[font=Courier]$ sudo ./install.sh[/font]

4. Install NITE
I used the binaries: OpenNI Compliant Middleware Binaries -> Unstable -> Mac
I replaced the headers and binaries inside /addons/ofxOpenNI2/libs/nite/include and /addons/ofxOpenNI2/libs/nite/libs with the files from the more recent version of NITE downloaded above.
Use the license key. You can find it inside /addons/ofxOpenNI2/example/bin/data/openni/config/ofxopenni_config.xml
[font=Courier]$ sudo ./install.sh[/font]

5. Install SensorKinect Drivers
I used the binaries from the unstable branch.
[font=Courier]$ sudo ./install.sh[/font]

6. Install libusb
It looks for libusb in the MacPorts folder, but I’ve sworn off MacPorts. Unfortunately the default Homebrew libusb install doesn’t work (it claims the wrong architecture; the brew install might be 64-bit), but the libusb-freenect formula does the trick.
[font=Courier]$ cd /usr/local/Library/Formula
$ curl --insecure -O "https://raw.github.com/OpenKinect/libfreenect/master/platform/osx/homebrew/libusb-freenect.rb"
$ brew install libusb-freenect[/font]

7. Install Xcode project
Drop the attached Xcode project folder in /apps/addonsExamples. It should just work, but brace for a bunch of warnings.

Works great so far! I really like the idea of a cleaned up interface to OpenNI.

Thanks kitschpatrol for this complete set of instructions!
I had problems at step 5 (Install SensorKinect Drivers): I couldn’t find the install script, so I followed the README instructions to build and install it:

Building Sensor:

  1. Go into the directory: “Platform/Linux-x86/CreateRedist”.
    Run the script: “./RedistMaker”.
    This will compile everything and create a redist package in the “Platform/Linux-x86/Redist” directory.
    It will also create a distribution in the “Platform/Linux-x86/CreateRedist/Final” directory.
  2. Go into the directory: “Platform/Linux-x86/Redist”.
    Run the script: “sudo ./install.sh” (needs to run as root)

The install script copies key files to the following location:
Libs into: /usr/lib
Bins into: /usr/bin
Config files into: /usr/etc/primesense
USB rules into: /etc/udev/rules.d
Logs will be created in: /var/log/primesense

To build the package manually, you can run “make” in the “Platform\Linux-x86\Build” directory.

Fixed: the install script is zipped in the Bin/ dir.

And at step 6 (Install libusb), the folder “/usr/local/Library/” did not exist, so I could not do that step.

Fixed: I had missed installing Homebrew, and I also had to move the MacPorts folder aside (to uninstall it temporarily) to get a proper Homebrew installation set up.

This was tested with ofxOpenNI2 + OF 007 Git (master branch) + Xcode 3.2.3 + Mac OS 10.6.8.

Thanks, it works fine now.

Hello, all.

@charli_e: I think the install script is zipped in the Bin/ dir.

@kitschpatrol: good tutorial. Thanks
I’m running it on Snow Leopard and it’s compiling just fine. First I had to move the libs to data/openni/libs and data/openni/libs/openni in order to compile everything. The problem is that I have to wait many seconds for it to start working, and while I wait I get this message on the console:

      192 INFO       OpenNI version is 1.3.4 (Build 3)-MacOSX (Oct 11 2011 10:48:24)  
      214 INFO       --- Filter Info --- Minimum Severity: ERROR  

Then after some time the window appears and everything works like a charm, with this message:

Setting resolution to VGA  
Warning: USB events thread - failed to set priority. This might cause loss of data...  
ofxOpenNI: OF_LOG_NOTICE: setup from XML status: OK   
ofxOpenNI: OF_VERBOSE: Creating depth generator   
ofxOpenNI: OF_VERBOSE: Creating device   
ofxOpenNI: OF_VERBOSE: Creating image generator   

Is this normal? Other implementations of OpenNI wrappers don’t take so long. Any idea what it is?


Hey just found this while rewriting ofxOpenNI. Pretty much where I was heading so I’m borrowing some of the structure if that’s ok?

A couple of questions/comments:

ofxOpenNIUser contains a vector of limbs

Did you see any performance hit from using a vector instead of an array?

ofxOpenNIUser also contains the point cloud and the mask for that user

This is nice for when you are using tracking, although I think I’ll add back in ‘raw’ masks and point clouds, as they are handy for when you are not tracking ‘humans’ (i.e. not skeleton/user tracking, but just using the depth data to isolate objects, etc.)

The internal structure of the tracker maintains a map of tracked users by id

Haven’t checked this bit out yet, but if it works well I’ll convert to something like it for hand tracking…

There’s less pointers, everything is passed and returned by reference when possible

Yep, it was all unnecessarily complicated before. However, in order to get multiple Kinect devices to work I’m having to use pointers so I can dynamically allocate pixels, textures and boolean properties for any number of devices. This could be neater if it were possible to have separate contexts for each ‘device’, but it seems I need to have a single context with multiple generators/devices to get this working.

Right now I’m testing with dynamically allocated tex, pix, etc. in a single openNI.h wrapper, but this means there are nasty *'s everywhere. Maybe it would be easier to either: a) separate out the context; or b) essentially take what you and (now) I have in openNI.h and make an ofxOpenNIDevice. Personally I don’t mind having all the pointers, but if you think it obfuscates the code, maybe it’s better to make openNI.h just own the context and a series of ‘devices’ with the same methods and properties that openNI.h has right now? (Does that make sense?)

Finally, I’m wondering why you are swapping between currentDepthPixels <-> backDepthPixels and currentRGBPixels <-> backRGBPixels. Did you need a back buffer? Was there a problem with the threading on Linux? I’m testing on a Mac and everything works fine without the swap (i.e., no tearing or crashing with just a straight write to pixels in the thread and an upload to the texture in update()).

Does doing the swap mean that the image/depth drawn to screen is a frame behind the data? Or is my thinking all wrong about that?


Pretty much where I was heading so I’m borrowing some of the structure if that’s ok?

of course, no problem

Did you see any performance hit from using a vector instead of an array?

A vector is exactly the same as an array; it depends on how you use it. Coincidentally, I just wrote an article about that on my blog: http://arturocastro.net/blog/2011/10/28/stl::vector/

This is nice for when you are using tracking, although think I’ll add back in ‘raw’ masks and point clouds as they are handy for when you are not tracking ‘humans’ (ie. not skeleton/user tracking but just using the depth data to isolate objects etc)

Yes, my thought was that you could get the whole point cloud from ofxOpenNI; I just didn’t add it yet because I didn’t need it.

About the map by id: the previous way was kind of messy. There was always a user 0 even if no user was tracked, and there was also an array with all the users created at all times, plus flags to signal whether they were detected or not. This way there’s no need for any of that.

About using several Kinects, I haven’t got into that yet, but I guess the context can be static or a singleton if there can only be one copy.

The back/front buffering is indeed for avoiding tearing. At that resolution it shouldn’t happen, but better to be sure, especially when these images are going to be used for analysis rather than just drawing.

This looks great! Just a quick thought: having glanced through the ofxOpenNI code a little, some of the pointers for multiple Kinect devices might be a nice place to use a shared pointer, like the xn::Recorder*, etc.?

It’s nice that this acts more like ofxKinect, but I’m really sad to see a duplication of effort, since gameover has been working so hard on ofxOpenNI. It’s also confusing for new people who just want to use OpenNI and aren’t sure which one is best.

Is there some way you two, Arturo + gameover, could figure out where to take the OpenNI development next?

sorry to point out the elephant in the room :slight_smile:

haha :slight_smile: no, no problem. I started ofxOpenNI2 because I had an urgent project and there were some things in ofxOpenNI that I didn’t like/understand, so it was faster to rewrite it than to try to understand how it worked and fix it. But if gameover wants to maintain it, I’m totally cool with that.

I think the main points are what I said in the first post. Also, I realized yesterday that the limbs only have position, which is useful for drawing them, but the orientation of the joints is actually more useful in other cases. I’m working on that and will push it when I understand how to transform it to an OF structure.

Anyway, gameover, whether you want to manually add the changes to ofxOpenNI, or prefer to add whatever you think is missing to ofxOpenNI2 and then merge it back into ofxOpenNI, either is no problem at all for me.

Hey Arturo & Kyle my last post (in between yours) got swallowed by the spam bot apparently.

I’ll try again with less words :wink:

…in the end, the code and the feedback about what pattern/structure people find most useful are what make me happiest!

On that note:

Last night I took Arturo’s approach and re-instated initialising generators from code (as opposed to init’ing from XML), and started testing for multiple-Kinect compatibility - something I believe is very high on everyone’s list of functionality. I had to do a bunch of tests to figure out how this should work -> at first I thought there could be a separate xn::Context for each device, but it turns out this is not the case. However, with a single xn::Context I am getting really good performance (even on my MacBook Pro), which is exciting.

This poses an immediate problem of how to structure access to a single threaded xn::Context whilst maintaining a simple API for dealing with several Kinects simultaneously.

By far the easiest is to leave an xn::Context in a threaded openNI class, and then use a vector/array to store instances of an openNIDevice (or something similar), which would look very much like what Arturo has got in openNI.h/.cpp.

This would lead to code like:

ofxOpenNI openNI;  
openNI.setup(); //or openNI.setupFromXML(someFile.xml); would only need a single setup for all devices  
openNI.getDevice(0).addDepth(); // or could use operator overloading of [] so that we can use  
openNI.getDevice(0).addImage(); // openNI[0].addDepth() - but then we lose code completion  
openNI.addDepth(0); // where the int is the device number???  
openNI.addImage(0); // this is how I did things last night for speed of development   
openNI.addDepth(1); // it has the advantage of being able to set a default value of 0 in all the normally  
openNI.addImage(1); //  accessed functions so that for single instances everything is invisible to the user, but is maybe a bit messy?  
openNI.update(); //we'd only need one update, not per device as we could track instances easily  
openNI.drawDebug(); // could draw all depth/image/ir/nodes etc  
openNI.getDevice(0).drawDepth(0,0,320,240); // class method to determine device  
openNI.drawDepth(0,0,320,240, 0 <- deviceID); // pass variable to determine device  
openNI.drawRGB(0,0,320,240, 1 <- deviceID);  
openNI[0].drawDepth(0,0,320,240); // operator overload to determine device  

However, this leads to some (possibly) annoying nomenclature and departs from the ofVideoPlayer/VideoGrabber/SoundPlayer/etc. model, where we can just declare instances of the device/player class and use listDevices, setDeviceID and initKinect to allocate a device to each instance (as much as I’m not a fan of these method names, they are common across a number of classes).

I’m assuming some people might prefer code that looks like:

ofxOpenNI openNI[2]; // or ofxOpenNI openNIA; ofxOpenNI openNIB; or ofxOpenNI * openNI = new ofxOpenNI[2]; etc  
openNI[0].listDevices(); // or openNI[0].getDevices() - which I prefer but is not standard  
openNI[0].initKinect(...some arguments...width, height, fps, threaded, a bit mask of nodes to add????);   
openNI[0].setup(...args...); // again I prefer this but it's not what we have in other classes - which to use?  
openNI[1].update(); // we'd need to call this for each device, even if the xn::Context is 'hidden' and threading in the background somewhere  

Feedback about this would be most helpful right now :slight_smile:


A vector is exactly the same as an array; it depends on how you use it. Coincidentally, I just wrote an article about that on my blog: http://arturocastro.net/blog/2011/10/28/stl::vector/

That article is really great -> I’ve been wanting to talk about vectors vs arrays for some time now. I’ve done some testing to compare, based on your article. Results and code are in a new thread… the gist is that I found everything you talk about in that article to be true until I get to allocations over a certain number of bytes; then I get mixed results, with push_back starting to work faster than [] index access on vectors, and arrays out-and-out faster on multidimensional data (e.g. unsigned char[10][1920*1080*4] to store a bunch of pixel masks)…

The back/front buffers is in effect for avoiding tearing, at that resolution it shouldn’t happen but better to be sure mostly when this images are going to be used for analysis more than just drawing them

I’m not sure it’s necessary in this case. The call xn::Context::WaitAnyUpdateAll (or a variant thereof) ensures that all data from the Kinect finishes processing before the MetaData objects are (re-)populated. Therefore, if we lock()/unlock() during the process of generating textures (rather than when swapping the pointers), we should have no risk of tearing… and essentially this is a double buffer already. Doing it again seems like triple buffering!? Is there an efficiency issue with locking during MetaData retrieval?

My understanding (and I could be wrong here) is that if the lock/unlock is only around the swap, but the generation of image/depth pixels happens without locks as you have it, it’s possible for tearing to occur if the thread manages to fire twice (if that is even possible?). But then swapping in the back buffer is just going to make a copy of the ‘torn’ pixels. Please let me know if I have that upside down.

My reason for asking about it is that I am using this to do a lot of registered projection (where the image is re-projected back onto the moving subject), and any unnecessary double (or in this case triple?) buffering is going to add latency.

About multiple Kinects, my idea is that there could be an internal private class in ofxOpenNI for which there’s only one instance (a singleton) that gets created with the first instance of ofxOpenNI. That class handles the threading, and you can still create one ofxOpenNI per device. But I need to know how this works in more detail; it seems weird that there can only be one updateAndWaitAll - if you have several cameras that’ll be really slow.

About the buffering: you are right that there can still be tearing if the main thread is really slow; I need to add some code to optionally drop frames in that case. But it’s way faster to lock only for the swapping (even when the texture update locks, it’s only blocking the swap in the other thread) than to lock the whole WaitAnyUpdateAll, which would have the effect of reducing the main thread’s fps to that of the camera.

I’ve just uploaded the changes adding the orientation of the joints to the user + the general point cloud in ofxOpenNI.

@gameover, what do you think about creating a refactor branch in your repo so I can send you a pull request with the code I have right now in ofxOpenNI2? Then we can keep working from there. The only problem is this will break old code, but we can keep a tag in your repo with the old version so old projects have something to work with.

Hey that sounds great Arturo!

I’ve made two new branches and checked in some code I’ve derived from your initial commits. Feel free to work from that code or just issue a pull request any which way you think is best. If you want push rights I’m happy to do that too.

The two branches I checked in are:

‘refactor-multicontext’: trying to use multiple devices with multiple contexts -> this is NOT working and is mostly there in case someone can give me a sanity check… if people are interested I have some very simple examples I’ve been using (i.e., no classes, just in a testApp) to try to figure this out.


‘refactor-singlecontext’: trying to use multiple devices with a single context -> this IS working and is probably where we should go from. Your feedback would be great, and if you prefer I can just make a blank refactor-arturo branch?

Essentially ‘refactor-singlecontext’ is your API style with some omissions and modifications:

  • I didn’t implement init from xml yet, instead
  • I implemented initialising devices from ‘code’ for multiple devices
  • I played around with lock/unlock positions -> I’m sure you know more about this but would be good to compare results and get your feedback on safety vs efficiency
  • I got rid of the back buffer because I don’t think it’s necessary if we lock/unlock on the GetMetaData(…) and generatePixels() calls, as these operate like a back buffer already
  • I spent most of my time on getting multiple Kinects to work -> although a lot of posts online seem to indicate that people have ‘attempted’ multiple Kinects with multiple contexts, I was unable to get this to work so far… the 2Real implementation is using a single context… need to check what ROS is doing… anyway, a single context seems to run plenty fast (though CPU overhead is high).
  • Depending on lock/unlock my 2010 MacBook Pro can get either 120-180 fps or 350-450 fps (application frame rate)

I’m kind of worried again about having a lot of threads discussing development (and then eventually seeking help with implementations)… should we keep discussing this here, or in the ofxOpenNI Development thread, or does it not matter too much?

And finally a screen shot of double kinect trouble :slight_smile:

![](http://forum.openframeworks.cc/uploads/default/1934/Screen shot 2011-11-04 at 2.05.51 AM.png)

Great, I guess I can download your single-context branch and add the user tracking; it seems that’s the only thing missing, right?

About locking and the double buffering: the difference between locking the whole generateBuffers function and locking just the buffer swap is that the first blocks the main thread for much more time than the second. That’s why you are getting 120 fps when you lock; in my branch I get 370-400 fps with locking only around the swap. Also, with two buffers there’s less chance that the data will get partially overwritten while you are working with it in the application, where there are no locks.