ofxOpenNI Development

@briangibson:

Thanks for posting that code…hadn't seen that snippet before. I've been working on multi/single kinect initialization based on a snippet from the openni forum (from around May 2011): http://openni-discussions.979934.n3.nabble.com/OpenNI-dev-Skeleton-tracking-with-multiple-kinects-not-solved-with-new-OpenNI-td2832613.html

The methods are basically the same:

* First init the context -> I'm not sure you can have multiple contexts…I'll check into it, but I think the idea is that you have one context and multiple devices, each with multiple "production" nodes (i.e., depth, image, IR, audio generators)
* Then you enumerate the production tree, either a) creating new nodes specified by the user "programmatically" (depth + image would be common, but could also be IR and/or audio), OR b) executing a nominated XML config. I've tended to use the first method, but the openNI/NITE examples very often do the latter. Quite often those examples (and indeed ofxOpenNI/gameoverhack) use a method like your second edit to check the tree for production nodes that already exist…essentially that's what I'm doing when ofxOpenNI starts up and spits out a message like "Looking for Depth Generator. None Found…creating!", except that instead of stepping through the tree, the checks are done on initialization (setup) of depth, image, IR etc, so the same setup code works regardless of whether init is handled by executing an XML config or by requesting the nodes programmatically. I think it's good to support both methods of init'ing a kinect (rough sketch of the two below)…
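For anyone following along, here's a minimal sketch of those two init paths using the raw OpenNI C++ wrapper (error handling trimmed; the XML file name is just a placeholder, not a file that ships with ofxOpenNI):

#include <XnCppWrapper.h>

xn::Context context;
xn::DepthGenerator depth;

// a) programmatic init: look for an existing depth node, create one if none is found
void initProgrammatically() {
    context.Init();
    if (context.FindExistingNode(XN_NODE_TYPE_DEPTH, depth) != XN_STATUS_OK) {
        // "Looking for Depth Generator. None Found...creating!"
        depth.Create(context);
    }
    context.StartGeneratingAll();
}

// b) XML init: the config file declares the production nodes up front
void initFromXml() {
    if (context.InitFromXmlFile("openni_config.xml") == XN_STATUS_OK) {
        // pick up whatever depth node the XML script created
        context.FindExistingNode(XN_NODE_TYPE_DEPTH, depth);
        context.StartGeneratingAll();
    }
}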

I think it would be good to restructure ofxOpenNI to handle both methods of enumeration like this…which raises a naming question:

* Should we make ofxOpenNIContext just a wrapper for the context, and create a new "manager" class called ofxONI (as per roxlu), or ofxOpenNIManager, or just plain ofxOpenNI, that does all the work of enumerating devices and init'ing depth, IR and image nodes? Or should ofxOpenNIContext do this job itself? I'm leaning toward using ofxOpenNI (currently just an include .h file), as that should make it easier not to break existing code and looks more symmetric with ofxKinect…

on that note: @danomatika thanks for the links…I will look at how you've been wrapping things…on the face of it I agree that either the openNI "manager" class should mirror ofxKinect, or at the very least we should adopt/create an (extra?) interface class to allow easier a/b comparisons…it would be good to have feedback from other ofxKinect/ofxOpenNI "dual" users on these issues…do other people feel strongly that we should mirror the ofxKinect API?

Here are some of my thoughts to open up discussion about sharing API styles:

* the underlying libraries for ofxKinect and ofxOpenNI work in quite different ways…there are going to be some fairly essential differences -> can we really make them interchangeable? And if they don’t completely swap in and out is it worth the time mirroring API style -> not all calls are easily made to work the same way…
* I remember when I was wrapping the point cloud and depth pixel stuff that keeping the same API was going to make some function calls less efficient if I used exactly the same semantics, so I couldn't do it easily
* there are some methods I just plain don’t like the name of (eg., numConnected()) which I guess is no reason not to use them, just that it doesn’t feel in keeping with the overall style of everything else…
* there are a slew of methods ofxOpenNI needs which ofxKinect does not, or which are more in keeping with the underlying code (for instance, openNI calls "calibrating" the depth image to the rgb image "registration", and ofxOpenNI method names reflect this difference in naming, on the basis that searching for help when developing code, or looking for other examples using such a feature, yields (more?) relevant results; a small raw-openNI snippet after this list shows the call in question)…
* there are a number of methods that ofxKinect could adopt that ofxOpenNI is using (such as multiple depth pixel masks etc)…
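Purely to illustrate where the openNI naming comes from: the depth-to-RGB alignment in the raw API goes through the AlternativeViewPoint capability, roughly like this (assuming depth and image generators already exist):

// align ("register") the depth map to the RGB camera's viewpoint
if (depth.IsCapabilitySupported(XN_CAPABILITY_ALTERNATIVE_VIEW_POINT)) {
    depth.GetAlternativeViewPointCap().SetViewPoint(image);
}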

@roymacdonald: indeed, it looks like you've forked from roxlu and then copied my include, lib and src files over the top of the "roxlu" ones…which is going to be a bit icky for a pull request on roxlu, as my changes become cut off from their history and it's hard to see where your code ends and mine begins. Although it's some work, I'm wondering if it would be easier to make a new fork from my repo and then add your changes to that repo on a new branch called, e.g., "feature_autoskeleton" or something like that? Then you could issue the pull against my repo, and if you did any copy-pasting of code that you've altered/corrected in files based on my src, those changes will correctly show up in my fork as alterations to the latest code…

@gameover: I guess the easiest way to handle my fork is to delete the current one, begin a new one from yours, add my code to that, and then send you a pull request.

About ofxKinect: I haven't used it so I can't say very much about it. Still, I think we should not worry about making ofxOpenNI swappable with ofxKinect or mirroring its API style.

An overall manager for ofxOpenNI sounds good, so that it handles initialization and everything, to make it easier to use.
For instance, something like:

  
ofxOpenNI oni;
oni.init();
oni.generateDepthImage();
oni.generateRGBImage();
oni.generateUsers();

etc.
  

Naming should be descriptive, using complete words, but not too long.

That's all for now.

@gameover: You can have multiple contexts if you are playing back a .oni recording in one context and receiving a live stream in another - I am doing that currently and it works fine. As a result, I assume you could have two contexts, one for each live kinect stream. Right now my app is using 5 threads via ofThread:
1 thread for live stream retrieval, 1 for live post-processing,
1 for recorded stream retrieval, 1 for recorded postprocessing,
and the main thread, which only draws to GL.

This makes it a heck of a lot more efficient to draw multiple things at once to the screen. On the other hand, it has some issues: if a context times out, the thread can get 'stuck'. I will need to spend some time setting up callbacks to detect and handle errors when using threads, I think. Thankfully the OpenNI API supplies mechanisms for creating callbacks.

Anyway, the multi-context thing may only make sense for certain use cases… I’m not totally sure what the best infrastructure is for multi-kinect, but it’s all I can come up with for being able to call context.WaitOneUpdateAll() and have it be non-blocking across all kinect input streams.
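Roughly speaking, each context lives in its own ofThread subclass, something like this (simplified sketch, not my production code; it assumes the context and depth generator are already initialized elsewhere):

#include "ofMain.h"
#include <XnCppWrapper.h>

// one of these per context, so blocking calls never stall the GL/draw thread
class ContextThread : public ofThread {
public:
    xn::Context context;
    xn::DepthGenerator depth;

    void threadedFunction() {
        while (isThreadRunning()) {
            // blocks until new data is available for this context only
            XnStatus status = context.WaitOneUpdateAll(depth);
            if (status != XN_STATUS_OK) {
                ofLog(OF_LOG_ERROR, "WaitOneUpdateAll failed: " + string(xnGetStatusString(status)));
            }
            // lock()/unlock() around any buffers shared with the draw thread
        }
    }
};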

Hey Brian

Of course you can have multiple contexts - what was I thinking! - that's exactly what I put in the example for playing and recording a stream…I hadn't thought about it for multiple devices, as most of the code I can find only inits one context and then multiple depth/image streams…

Interesting about the multi-threading…I need to play more with multiple Kinects to really work out how best to handle them…I have another one on its way this week after an impulse ebay purchase :wink:

What kind of “post-processing” effects are you doing?

Also wondering if you have a github fork where you’re working on this code?

Check out this
http://forum.openframeworks.cc/t/2realkinectwrapper/7525/0
Sounds really interesting and it could be of great help for us.

BTW, so far no improvements on my side. been very busy. :frowning:

Hey Roy

Yep, just been checking it out…definitely a great place for us to look, learn and compare!

Down deep the code structure looks very much like an updated ofxOpenNI :wink: …though they use a more complex API style than the oF "norm" (i.e., bit flags, try…catch…exception, namespaces etc…all things that I use a lot in my personal project code but that are perhaps not beginner friendly)…

I think we should go one layer more abstract with this project…using those methods but keeping them wrapped up for the user (so they can use them if they know how, but don't need to if they don't understand them) - I've been working towards an API where, at its simplest, you just say setup(), update(), draw() and voila, you're up and running…
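Something like this is what I mean by "simplest case" (the method names here are just illustrative, not a settled API):

// hypothetical simplest-case usage inside an ofApp
class testApp : public ofBaseApp {
public:
    ofxOpenNI oni;

    void setup()  { oni.setup(); }     // context + default depth/image nodes
    void update() { oni.update(); }    // grab new frames
    void draw()   { oni.draw(0, 0); }  // draw whatever generators are running
};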

I guess this is one of the key differences between developing 'wrapper' code for *any* C/C++ environment and developing specifically for an oF audience and compile/runtime environment. E.g., 2Real are using Boost threads and, by the looks of it, dealing exclusively with pixel data for depth/ir/image rather than directly using textures, so that their code stays "agnostic", so to speak…

I’m wondering if people could give some input on general API style and internal structure for the “new” ofxOpenNI?

General things I’ve been thinking about:
* How oF-specific should we be? Currently ofxOpenNI only uses a couple of oF data types (ofVec3f, ofTexture) and could easily be made not to rely on oF at all…should we go with this "agnostic" approach? Or should we whole-heartedly rely on/leverage the "core", i.e., use ofThreads, ofEvents, ofSetColor, etc? (I think this is especially interesting in light of topic,6891.60.html)
* One thing I find really interesting about the 2Real code is the way they're wrapping both openNI and the Windows SDK…perhaps this approach could be used to wrap ofxKinect and ofxOpenNI (and even the Windows SDK)? Should we work toward that?
* How do we handle multi-kinect setups? Should the "end" user be iterating through devices to do draws, updates and get skeleton data? Or should this be handled entirely by a "manager" class? I'm trying to figure out a way where you could do both (see the rough sketch after this list)…but I'm not sure it's worth it? (This last one is a little difficult for me at the moment as I'm coding blind, waiting for a second Kinect to arrive.)
* Similar questions for the depth, image, ir, user and hand generator "nodes": do we still want to be able to access these to instantiate context -> device -> generator nodes ourselves, or should it all just be wrapped in the "manager" class? Or both?
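To make the last two points concrete, these are the two access styles I'm weighing up (class and method names are purely hypothetical):

// a) the "manager" does everything
ofxOpenNIManager manager;
manager.setup();           // enumerates all attached devices + default nodes
manager.update();          // updates every device/generator
manager.drawDepth(0, 0);   // draws device 0 by default

// b) the user iterates through devices explicitly
for (int i = 0; i < manager.getNumDevices(); i++) {
    manager.getDevice(i).drawDepth(i * 320, 0, 320, 240);
}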

More specific things I’ve been thinking about:
* should we use a namespace?
* should we drop the ofxOpenNI prefix on file names and/or function names? (see discussion: topic,7386.new.html#new)
* should we use try/catch/throw exceptions to handle init’s etc or if/for/return true/false style checking?
* Should we use vectors or arrays or a combination of both to store things like device, image, depth and user nodes, and then point clouds, depth masks etc? [personally I really don't like vectors in code that is speed- (i.e., draw-) dependent, but avoiding them makes the implementation more difficult and the code tends toward write-only…see the small sketch after this list]
* more?
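For comparison, a minimal sketch of the pre-allocated vector approach: resize once at setup, then only index with operator[] (which does no bounds checking) inside the speed-critical loop:

#include <cstddef>
#include <vector>

std::vector<float> depthMask;

void setup() {
    depthMask.resize(640 * 480, 0.0f);   // allocate once, up front
}

void update(const float* newDepth) {
    // operator[] does no bounds checking, so this inner loop
    // behaves much like indexing a raw array
    for (std::size_t i = 0; i < depthMask.size(); i++) {
        depthMask[i] = newDepth[i];
    }
}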

I don't have much time for coding this until after next Thursday (13/10/2011), as I have a show opening that day. I'm aiming to put all these words (and any feedback I get by then) into action over the following weekend…including moving the repo etc. If pull requests are not in by that time, I'll do my best to incorporate them into a last update to the current form of ofxOpenNI before changing anything major, so we still have a working version…

Might be nice to chat via Skype on the 14th or 15th with those who want a bigger role in design/maintenance -> call me old school, but I find email and forum posts are no substitute for eyes and ears when it comes to collaboration :wink:

Hi all

So what happened with this? Which is the latest version, gameover's?

Also, in gameover's github there are 3 branches: master, develop and experimental. Should I be working with develop or experimental? I don't mind if it's a little unstable. I'm going to use this for a project, so will probably contribute fixes and features.

Yes I think mine is the latest version. I emailed Roxlu (10/02) about moving to a separate github repo, but haven’t heard back from him. I’ll email again and see if it can happen ASAP.

Master & develop should be the same at this point (they are the versions I’ve been deploying for installation/performances) and are compiled against openNI 1.1.x.

Experimental is compiled against latest openNI 1.3.2.x but is only tested in of007 on Mac OSX.

I have a major festival launch on Thursday so major changes won’t happen till after 13/02.

If you're using the experimental branch, I'd comment out any of the new code that reports node Capabilities, as it slows down initialization a lot; I only put it in to see which new features were actually working with the latest drivers.

great, thanks, will go with develop for now then and try experimental later

@gameover is 02/13 a typo? Or are you held up until February? Just curious.

lol! I’m definitely super busy, but no I meant 13/10…time to sleep :wink:

@gameover, I just forked your branch and I’m updating it with my stuff.
I’ll send you a pull request later.
BTW, the autoskeleton feature requires openNI 1.3.2, so I'll fork the experimental branch.

good luck with that festival!

mmh, I'm getting a crash, a segmentation fault, when I activate the user tracking, because GetHandler() in UserGenerator is returning a null pointer. I also see:

Create user generator failed: Can’t create any node of the requested type!

in the console when the program starts. Am I missing something? I've installed the openNI library, the binaries and SensorKinect.

@Arturo, which branch are you using?

master right now

It's working for me, at least on OSX.
Which version of OpenNI do you have installed? Remember that you also need to install NITE; the user tracking depends on NITE, if I recall correctly.

that must be it, where can I download NITE?

mmh, ok is this it?

http://www.openni.org/downloadfiles/openni-compliant-middleware-binaries/34-stable

there doesn't seem to be a Linux version.

From the OpenNI download section: select "OpenNI compliant middleware binaries", unstable, and the build that suits your system.

hey, just booting into Linux…forgive me, 'nix is not my forte :wink:

ok, master is working for me…ahhhhh…but not for long: after a while, once it's been tracking, it segfaults

Not sure it's relevant, but I'm on of062…haven't upgraded my Linux install to 007…also I only compiled on a 64-bit version of 10.10

Retrieve user generator failed: No match found
Creating user generator succeed: OK

Is what I get…I’m pretty sure I didn’t alter the drivers in any way, but will check this

Perhaps it is worth trying the new drivers with the experimental branch?

Re NITE: I used NITE-Bin-Linux64-v1.3.1.5.tar.bz2…I found a version here: https://bitbucket.org/kaorun55/openni-1.1.0.41/src/079995a3f04a/Linux%2064bit