ofxOpenNI Development

Opening this thread for discussion of developing the ofxOpenNI addon. It should hopefully replace the original OpenNI-Skeleton-Tracking thread.

At the end of that thread you can find discussions about merging the “Roxlu”, “gameoverhack” and other forks of the project. Diederick and I are hoping that by merging the forks we can provide a consistent and less confusing framework for using openNI drivers in openFrameworks.

In my mind the first thing to do is work out how to make this merge as smooth as possible and not lose all the work that many people are putting in on both forks.

Roxlu has asked github to move the repository to my github account. At first we weren’t sure if this was possible, but he’s received an email from github (forwarded to me yesterday) indicating it is possible to move a repo if the github account it gets moved to is deleted first. It’s also possible to just move the root, but both solutions have consequences for all users currently working on either of our forks: they’ll no longer have an upstream, and any history of commits, wikis etc. is lost on one account or the other. See the github help page on moving-a-repo.

Although I’m happy for the repo to live in my github account, given the number of us developing features on these fork points (many of which are not yet merged upstream), I’m wondering whether it might be better to move ofxOpenNI to its own github page. Let me know what people think, but my feeling is that this will give us the best “history” of development without breaking anyone’s upstream dependencies too much, and allow both Roxlu and me to ultimately merge and close our forks.

My plan would be:

* Anyone working with either the “roxlu” or “gameoverhack” repos should issue pull requests for bug fixes or features on the appropriate fork (in particular I think it would be great if Roy could issue a pull request on Roxlu’s repo -> is that cool with you Roy?)

* Pull requests get merged on both forks. Roxlu has given me push permissions to his repo, so I should be able to administer merge/pull requests on that fork now. I’ve got some open pull requests and some messages from users about issues, so I’ll fix those up on my fork over the next few days too.

* I issue a pull request on Roxlu’s fork (and do a merge? or leave it till after we move the repo?)

* We move Roxlu’s repo to a new github repo, or to my github account (depending on what people want) and add anyone who wants to be an active developer to that account with Push or Admin privileges (me, Roy?, Roxlu? anyone else?)

* Then we merge the roxlu / gameoverhack forks, and/or re-design with all the relevant bits of code in one place.

How does that sound?

This is great!!

The truth is that I don’t really know from which fork I forked my fork…
I think it is actually a fork from gameoverhack and not from roxlu, but due to my newbieness with github I forked the incorrect fork.
I’ll double-check where I forked from, update that, and send the corresponding pull request.

I think the best approach would be for ofxOpenNI to have its own github page, like openFrameworks does, and then add admins and push privileges. I want to have those privileges BTW.

As discussed before, we should start this new version of ofxOpenNI from scratch, yet before doing so we should merge all the current forks so as to leave a “final” version of the current ofxOpenNI, so people don’t get confused by all the different forks and all the issues that arose in the ofxOpenNI skeleton tracking thread.

Then we should begin designing the new implementation of ofxOpenNI.

So, for now I’ll check where I forked my fork from, compare roxlu’s and gameoverhack’s latest versions, and analyze how to merge both.

I’ll let you know about improvements.

cheers!!!

I’m working on a multiple-kinect implementation. Maybe this will help you poke at it too, before I get a chance to release something proper. By enumerating the kinect devices on your system, you can then create depth and image generators that belong to a particular device.

  
  
//taken from https://github.com/ros-pkg-git/ni/blob/master/openni-camera/src/openni-device.cpp
const char* ofxOpenNIContext::getSerialNumber ( const xn::NodeInfo& device_node_info_) const throw ()  
{  
	return device_node_info_.GetInstanceName ();  
}  
  
const char* ofxOpenNIContext::getConnectionString ( const xn::NodeInfo& device_node_info_) const throw ()  
{  
	return device_node_info_.GetCreationInfo ();  
}  
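
// GetCreationInfo() returns a string of the form "vendorID/productID@bus/address"
// (e.g. "45e/2ae@0/5"); the getters below parse it with sscanf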
  
unsigned short ofxOpenNIContext::getVendorID (const xn::NodeInfo& device_node_info_) const throw ()  
{  
	unsigned short vendor_id;  
	unsigned short product_id;  
	unsigned char bus;  
	unsigned char address;  
	sscanf (device_node_info_.GetCreationInfo (), "%hx/%hx@%hhu/%hhu", &vendor_id, &product_id, &bus, &address);  
	  
	return vendor_id;  
}  
  
unsigned short ofxOpenNIContext::getProductID ( const xn::NodeInfo& device_node_info_) const throw ()  
{  
	unsigned short vendor_id;  
	unsigned short product_id;  
	unsigned char bus;  
	unsigned char address;  
	sscanf (device_node_info_.GetCreationInfo (), "%hx/%hx@%hhu/%hhu", &vendor_id, &product_id, &bus, &address);  
	  
	return product_id;  
}  
  
unsigned char ofxOpenNIContext::getBus ( const xn::NodeInfo& device_node_info_) const throw ()  
{  
	unsigned short vendor_id;  
	unsigned short product_id;  
	unsigned char bus;  
	unsigned char address;  
	sscanf (device_node_info_.GetCreationInfo (), "%hx/%hx@%hhu/%hhu", &vendor_id, &product_id, &bus, &address);  
	  
	return bus;  
}  
  
unsigned char ofxOpenNIContext::getAddress ( const xn::NodeInfo& device_node_info_) const throw ()  
{  
	unsigned short vendor_id;  
	unsigned short product_id;  
	unsigned char bus;  
	unsigned char address;  
	sscanf (device_node_info_.GetCreationInfo (), "%hx/%hx@%hhu/%hhu", &vendor_id, &product_id, &bus, &address);  
	  
	return address;  
}  
  
const char* ofxOpenNIContext::getVendorName ( const xn::NodeInfo& device_node_info_) const throw ()  
{  
	XnProductionNodeDescription& description = const_cast<XnProductionNodeDescription&>(device_node_info_.GetDescription ());  
	return description.strVendor;  
}  
  
const char* ofxOpenNIContext::getProductName ( const xn::NodeInfo& device_node_info_) const throw ()  
{  
	XnProductionNodeDescription& description = const_cast<XnProductionNodeDescription&>(device_node_info_.GetDescription ());  
	return description.strName;  
}  

and then in another method somewhere, to search for which kinects are connected to your machine…

  
  
// lists of all available nodes:
static xn::NodeInfoList node_info_list;
static xn::NodeInfoList depth_nodes;
static xn::NodeInfoList image_nodes;

// enumerate all devices
XnStatus status = context.EnumerateProductionTrees(XN_NODE_TYPE_DEVICE, NULL, node_info_list);
if (node_info_list.Begin () == node_info_list.End ()) {
	printf("no kinects found\n");
}

// create device-specific depth and image generators for each kinect found
int i = 0;
for (xn::NodeInfoList::Iterator nodeIt = node_info_list.Begin (); nodeIt != node_info_list.End (); ++nodeIt) {
	const xn::NodeInfo& info = *nodeIt;
	const XnProductionNodeDescription& description = info.GetDescription();
	// OSX: compare and check with 'system_profiler SPUSBDataType' in terminal
	printf("KINECT FOUND AT: string: %s, bus: 0x%x, vendorID: 0x%x, vendorName: %s, product ID: 0x%x, productName: %s, address: 0x%x, instance name: %s\n", getConnectionString(info), getBus(info), getVendorID(info), description.strVendor, getProductID(info), getProductName(info), getAddress(info), getSerialNumber(info));

	// restrict node creation to this particular device via a query
	xn::Query query;
	query.AddNeededNode(info.GetInstanceName());
	XnStatus nRetVal = context.CreateAnyProductionTree(XN_NODE_TYPE_DEPTH, &query, depth->getXnDepthGenerator());
	SHOW_RC(nRetVal, "Create Depth");
	nRetVal = context.CreateAnyProductionTree(XN_NODE_TYPE_IMAGE, &query, image->getXnImageGenerator());
	SHOW_RC(nRetVal, "Create Image");

	i++;
}
cout << "KINECTS CONNECTED: " << i << endl;
  

EDIT: aha! The last bit of code above should be how to initialize device-specific depth and image generators when you have multiple kinects connected. I’ll test it next week when I get my 2nd kinect, since I’ll be out of town till then. The important bit of code is
context.CreateAnyProductionTree(XN_NODE_TYPE_DEPTH, &query, depth->getXnDepthGenerator());
because it filters the production tree for that specific kinect. I’m pretty sure the ‘create any’ in CreateAnyProductionTree is a bit of a misnomer, because what they really mean is ‘create a specific kind of production tree’.

this was discovered from digging around in code / forum posts / trying different things… and this code snippet in particular: http://openni-dev.googlegroups.com/attach/a7ed6eeca576d210/NiSimpleMultiDevices.cpp?view=1&part=2

  
			  
status = context.EnumerateProductionTrees(XN_NODE_TYPE_DEPTH, NULL, depth_nodes);
if (depth_nodes.Begin () == depth_nodes.End ()) {
	// there ain't no depth nodes!
}
// there's at least one depth node, so you can enumerate through 'em
else {
	for (xn::NodeInfoList::Iterator nodeIt = depth_nodes.Begin (); nodeIt != depth_nodes.End (); ++nodeIt) {
		const xn::NodeInfo& info = *nodeIt;
		const XnProductionNodeDescription& description = info.GetDescription();
		printf("DEPTH NODE FOUND AT: bus: 0x%x, vendorID: 0x%x, vendorName: %s, product ID: 0x%x, productName: %s, address: 0x%x, instance name: %s\n", getBus(info), getVendorID(info), description.strVendor, getProductID(info), getProductName(info), getAddress(info), getSerialNumber(info));
	}
}

status = context.EnumerateProductionTrees(XN_NODE_TYPE_IMAGE, NULL, image_nodes);
if (image_nodes.Begin () == image_nodes.End ()) {
	// there ain't no image nodes!
}
// there's at least one image node, so you can enumerate through 'em
else {
	for (xn::NodeInfoList::Iterator nodeIt = image_nodes.Begin (); nodeIt != image_nodes.End (); ++nodeIt) {
		const xn::NodeInfo& info = *nodeIt;
		const XnProductionNodeDescription& description = info.GetDescription();
		printf("IMAGE NODE FOUND AT: bus: 0x%x, vendorID: 0x%x, vendorName: %s, product ID: 0x%x, productName: %s, address: 0x%x, instance name: %s\n", getBus(info), getVendorID(info), description.strVendor, getProductID(info), getProductName(info), getAddress(info), getSerialNumber(info));
	}
}

The above code lets you iterate through your device and image nodes respectively, if you’d like to, for whatever reason. EDIT: just received word from Primesense on the forums that EnumerateProductionTrees doesn’t return existing nodes, just nodes that you COULD create. So there’s a different method you can use to find existing nodes (see the sketch after the next snippet), but this method will show you the production nodes that are able to be created. You can view every device/generator production node in the context with the following code:

  
  
// map XnProductionNodeType values 1-12 to their enum names
static const char* nodeTypeNames[13] = {
	"",
	"XN_NODE_TYPE_DEVICE",   "XN_NODE_TYPE_DEPTH",  "XN_NODE_TYPE_IMAGE",
	"XN_NODE_TYPE_AUDIO",    "XN_NODE_TYPE_IR",     "XN_NODE_TYPE_USER",
	"XN_NODE_TYPE_RECORDER", "XN_NODE_TYPE_PLAYER", "XN_NODE_TYPE_GESTURE",
	"XN_NODE_TYPE_SCENE",    "XN_NODE_TYPE_HANDS",  "XN_NODE_TYPE_CODEC"
};

for (int i = 1; i < 13; i++) {
	XnStatus status = context.EnumerateProductionTrees((XnProductionNodeType)i, NULL, node_info_list);
	if (node_info_list.Begin () == node_info_list.End ()) {
		printf("no nodes of type %i found\n", i);
	}
	for (xn::NodeInfoList::Iterator nodeIt = node_info_list.Begin (); nodeIt != node_info_list.End (); ++nodeIt) {
		const xn::NodeInfo& info = *nodeIt;
		const XnProductionNodeDescription& description = info.GetDescription();
		string devicetype = nodeTypeNames[i];
		printf("devicetype %s, string: %s, bus: 0x%x, vendorID: 0x%x, vendorName: %s, product ID: 0x%x, productName: %s, address: 0x%x, instance name: %s\n", devicetype.c_str(), getConnectionString(info), getBus(info), getVendorID(info), description.strVendor, getProductID(info), getProductName(info), getAddress(info), getSerialNumber(info));
	}
}
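
And for completeness, the “different method” for finding nodes that already exist should be Context::EnumerateExistingNodes - here’s an untested sketch, reusing the context variable from above:

// nodes that ALREADY exist in the context (vs. nodes that could be created)
xn::NodeInfoList existing_nodes;
XnStatus rc = context.EnumerateExistingNodes(existing_nodes);
for (xn::NodeInfoList::Iterator nodeIt = existing_nodes.Begin (); nodeIt != existing_nodes.End (); ++nodeIt) {
	const xn::NodeInfo& info = *nodeIt;
	printf("existing node: %s\n", info.GetInstanceName());
}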
  

@briangibson,
this sounds great.
Can you please upload all the code you are using, plus some example use?
I can’t implement it right now because I have a lot of work to do for tomorrow.
I have 2 kinects so I can test it right away.

Cheers!

I’m preparing to leave for San Francisco for a week so I won’t have much time to poke at this till I get back, but this is a quick way to enable multiple kinect support, I think, after looking at the autoskeleton example on your git, Roy. The structure of this might be a mess, but it will at least let you test the functionality. Be warned, there may be typos and dumb mistakes; I have not tried to compile this code, etc. etc.

  1. create as many depth generators and image generator instances as you want, in your testApp. If 2 kinects, create 2 depth generators and 2 image generators, for instance.
  2. remove the depth_generator.Create() and image_generator.Create() calls from the ofxDepthGenerator::setup() and ofxImageGenerator::setup() methods. you’re going to effectively replace those momentarily…
  3. in your context, create a new setup method to call this:
  
  
  
//new multi-kinect context setup method! call it what you will...   
if (initContext()) {  
	addLicense("PrimeSense", "0KOIk2JeIBYClPWVnMoRKn5cdY4=");  
	// lists of all available nodes:   
	static xn::NodeInfoList node_info_list;   
	// enumerate all devices   
	XnStatus status = context.EnumerateProductionTrees(XN_NODE_TYPE_DEVICE, NULL, node_info_list);    
	if (node_info_list.Begin () == node_info_list.End ()) {   
		printf("no kinects found\n");
	}   
	else{ //at least 1 kinect was found connected!  
		int i = 0;  
		for (xn::NodeInfoList::Iterator nodeIt = node_info_list.Begin (); nodeIt != node_info_list.End (); ++nodeIt) {   
			const xn::NodeInfo& info = *nodeIt;   
			const XnProductionNodeDescription& description = info.GetDescription();   
			//OSX: compare and check with 'system_profiler SPUSBDataType' in terminal  
			//printf("KINECT FOUND AT: string: %s, bus: 0x%x, vendorID: 0x%x, vendorName: %s, product ID: 0x%x, productName: %s, address: 0x%x, instance name: %s\n", getConnectionString(info), getBus(info), getVendorID(info), description.strVendor, getProductID(info), getProductName(info),  getAddress(info), getSerialNumber(info));  
			//the previous debug print statement is only usable if you create all those methods defined earlier in this forum thread  
			xn::Query query;  
			query.AddNeededNode(info.GetInstanceName());  
			testApp* myApp = (testApp*)ofGetAppPtr();  
			XnStatus nRetVal;  
			if(i == 0){ //kinect 0, initialize depth/image generators!  
				nRetVal = context.CreateAnyProductionTree(XN_NODE_TYPE_DEPTH, &query, myApp->depthGenerator0.getXnDepthGenerator());  
				SHOW_RC(nRetVal, "Create Depth");  
				nRetVal = context.CreateAnyProductionTree(XN_NODE_TYPE_IMAGE, &query, myApp->imageGenerator0.getXnImageGenerator());  
				SHOW_RC(nRetVal, "Create Image");  
				myApp->depthGenerator0.setup();  
				myApp->imageGenerator0.setup();  
			}  
		  
			else if(i == 1){//kinect 1, initialize depth/image generators!  
				nRetVal = context.CreateAnyProductionTree(XN_NODE_TYPE_DEPTH, &query, myApp->depthGenerator1.getXnDepthGenerator());  
				SHOW_RC(nRetVal, "Create Depth");  
				nRetVal = context.CreateAnyProductionTree(XN_NODE_TYPE_IMAGE, &query, myApp->imageGenerator1.getXnImageGenerator());  
				SHOW_RC(nRetVal, "Create Image");  
				myApp->depthGenerator1.setup();  
				myApp->imageGenerator1.setup(); //EDIT- fixed typo.  
			}  
			i++;  
		}	   
		cout << "KINECTS CONNECTED: " << i << endl;  
	}  
}  
  

  4. delete the depth / image _generator.setup() calls from testApp.setup()
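
For anyone wiring this up, the matching testApp declarations would look something like this (a sketch only, untested; the generator class names are as in the current addon):

class testApp : public ofBaseApp {
public:
	void setup();
	void update();
	void draw();

	ofxOpenNIContext  context;          // one shared context
	ofxDepthGenerator depthGenerator0;  // kinect 0
	ofxImageGenerator imageGenerator0;
	ofxDepthGenerator depthGenerator1;  // kinect 1
	ofxImageGenerator imageGenerator1;
};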

@briangibson
Great!
I’ll give it a try tomorrow and let you know.

Good luck in SF and thanks for this code.

:)

Cool, sounds good. I wonder if it makes more sense to have each kinect in its own context instance, though, so you can better control how and when each context’s WaitOneUpdateAll() gets called, and put each of those on its own thread per kinect, since retrieving images from the camera tends to be a CPU hog. Updating each generator individually/manually via WaitAndUpdateData() is not really recommended in the manual, and WaitOneUpdateJustTheGeneratorsISpecify() does not exist, so the only reasonable alternative is to make multiple contexts for multithreaded use, one per kinect. That’s something I can’t test without a 2nd kinect though.
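
To make the idea concrete, here’s a rough, untested sketch of what I mean (KinectContextThread is a made-up name; ofThread is openFrameworks’ thread wrapper, and startThread()’s exact signature varies a bit between oF versions):

#include "ofMain.h"
#include <XnCppWrapper.h>

class KinectContextThread : public ofThread {
public:
	xn::Context context;       // this kinect's own context
	xn::DepthGenerator depth;  // generators created against this context

	void threadedFunction() {
		while (isThreadRunning()) {
			// blocks this worker thread only, never the GL/draw thread
			context.WaitOneUpdateAll(depth);
		}
	}
};

// usage: one instance per kinect
// KinectContextThread kinect0, kinect1;
// kinect0.startThread(); kinect1.startThread();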

What you’ve said makes a lot of sense.
If you can implement that it would be great.

cheers!

[quote=“roymacdonald, post:2, topic:7403”]The truth is taht I don’t really know from which fork I forked my fork…
I think it is actually a fork from gameoverhack and not from roxlu, but due to my newbieness with github I forked the incorrect fork.[/quote]
If this is still unknown, it’s easy to check the fork hierarchy on github by looking at the Network -> Members view: https://github.com/roxlu/ofxOpenNI/network/members
Your fork was from roxlu.

I did some work with it for our class here at CMU: https://github.com/danomatika/ofxOpenNI/tree/experimental

There might be some changes that are useful.

Also, as I suggested in an issue, it would be super awesome if the interface worked like ofxKinect, so that you could easily swap code. ofxOpenNI would then just add some extra functions to get the skeleton, etc.

This would make things a lot easier for new users and those of us who are lazy and like compatibility. It would only take a high-level wrapper around the current ofxOpenNI …
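
Something like this hypothetical facade, for instance (the method names here just mirror ofxKinect’s general style; none of this is real code):

class ofxOpenNIKinect {
public:
	void open();                  // init context + depth/image generators
	void update();                // context.WaitOneUpdateAll(...)
	void draw(float x, float y);  // draw the rgb texture
	// plus the OpenNI-only extras layered on top:
	// getSkeleton(), getUsers(), ...
};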

I know that page from github, and I also knew that I forked roxlu’s fork, but then I replaced almost all of its code with the code I had already written, which (and this is where I’m guessing) was derived from gameoverhack (a guess, but I’m almost sure). So actually my fork is incorrectly forked from roxlu.
I apologize for my github newbieness.

I see. Not many people know this, but in compare view you can also compare across repositories, like this:
compare your master branch with gameoverhack’s master branch:
https://github.com/roymacdonald/ofxOpenNI/compare/master...gameoverhack:master
then you can compare with roxlu’s master and see which has fewer differences. You can also compare between different branches.
That way it’s easy to find out which one is most similar to yours.

@briangibson:

Thanks for posting that code…hadn’t seen that snippet before. I’ve been working on multi/single kinect initialization based on a snippet from the openni forum (from around May 2011): http://openni-discussions.979934.n3.nabble.com/OpenNI-dev-Skeleton-tracking-with-multiple-kinects-not-solved-with-new-OpenNI-td2832613.html

The methods are basically the same:

* First init the context -> I’m not sure you can have multiple contexts…I’ll check into it, but I think the idea is that you have one context and multiple devices with multiple “production” nodes (ie., depth, image, ir, audio generators)
* Then you enumerate the production tree, either a) creating new nodes specified by the user “programmatically” (depth + image would be common, but could also be ir and/or audio) OR b) by executing a nominated XML config (see the sketch just below). I’ve tended to use the first method, but openNI/NITE examples very often do the latter. Quite often those examples (and indeed ofxOpenNI/gameoverhack) use a method like your second edit to check the tree for production nodes that already exist…essentially that’s what I’m doing when ofxOpenNI starts up and spits out a message like “Looking for Depth Generator. None Found…creating!”, but instead of stepping through the tree, the checks are done on initialization (setup) of depth, image, ir etc., so that I could use the same code in the setup methods regardless of whether init is handled by executing an XML config or by programmatically requesting the nodes. I think it’s good to support both methods of init’ing a kinect…
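
Roughly, the two init styles look like this (an untested sketch; InitFromXmlFile is the openNI call for the XML route, and the config file name is just an example):

#include <XnCppWrapper.h>

void initProgrammatically() {
	// a) programmatic: init the context, then request nodes in code
	xn::Context context;
	context.Init();
	xn::DepthGenerator depth;
	depth.Create(context);  // explicitly ask for a depth node
}

void initFromXml() {
	// b) XML config: one call builds every node listed in the config file
	xn::Context context;
	context.InitFromXmlFile("openni_config.xml");  // example file name
}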

I think it would be good to restructure ofxOpenNI to handle methods of enumeration like this…a naming question:

* Should we make ofxOpenNIContext just a wrapper for the context, and create a new “manager” class, called ofxONI (as per roxlu) or ofxOpenNIManager or just plain ofxOpenNI, that does all the work of enumerating devices and init’ing depth, ir and image nodes? Or just make ofxOpenNIContext do this job? I’m leaning toward using ofxOpenNI (currently just an include .h file) as that should make it easier not to break existing code, and it looks more symmetric with ofxKinect…

on that note: @danomatika thanks for the links…will look at how you’ve been wrapping things…on the face of it I agree that either the openNI “manager” class should mirror ofxKinect or, at the very least, we should adopt/create an (extra?) interface class to allow for easier a/b comparisons…would be good to have feedback from other “dual” ofxKinect/ofxOpenNI users on these issues…do other people feel strongly that we should mirror the ofxKinect API?

Here are some of my thoughts to open up discussion about sharing API styles:

* the underlying libraries for ofxKinect and ofxOpenNI work in quite different ways…there are going to be some fairly essential differences -> can we really make them interchangeable? And if they don’t completely swap in and out, is it worth the time mirroring API style? -> not all calls are easily made to work the same way…
* I remember when I was wrapping point cloud and depth pixel stuff that having the same API was going to make some function calls less efficient if I used exactly the same semantics, so I couldn’t do it easily
* there are some methods I just plain don’t like the name of (eg., numConnected()) which I guess is no reason not to use them, just that it doesn’t feel in keeping with the overall style of everything else…
* there are a slew of methods ofxOpenNI needs which ofxKinect does not, or which are more in keeping with the underlying code (for instance, openNI calls “calibrating” the depth image to the rgb image “registration”, and ofxOpenNI method names reflect this difference, on the basis that searching for help or other examples using such a feature yields (more?) relevant results)…
* there are a number of methods that ofxKinect could adopt that ofxOpenNI is using (such as multiple depth pixel masks etc)…

@roymacdonald: indeed it looks like you’ve forked from roxlu and then copied my include, lib and src files in over the top of the “roxlu” ones…which is going to be a bit icky for a pull request on roxlu, as my changes become cut off from history and it’s hard to see where your code ends and mine begins. Although it’s some work, I’m wondering if it would be easier to make a new fork from my repo and then add your changes to it on a new branch called, e.g., “feature_autoskeleton” or something like that? Then you could issue the pull against my repo, and if you did any copy-pasting of code that you’ve altered/corrected in files based on my src, these will correctly show up in my fork as alterations to the latest code…

@gameover: I guess the easiest way to handle my fork is to delete the current one and begin a new one from yours, then add my code to that, and then send you a pull request.

About ofxKinect: I haven’t used it so I can’t say much about it. Still, I think we shouldn’t worry about making ofxOpenNI swappable with ofxKinect or mirroring its API style.

An overall manager for ofxOpenNI sounds good, so it handles initialization and everything, making it easier to use.
For instance, something like:

  
ofxOpenNI oni;
oni.init();
oni.generateDepthImage();
oni.generateRGBImage();
oni.generateUsers();

etc.
  

naming should be both descriptive, using complete words, and not too long.

that’s all for now.

@gameover: You can have multiple contexts if you are playing back a .oni recording in one context and receiving a live stream in another - I am doing that currently and it works fine. As a result, I assume you could have two contexts, one for each live kinect stream. Right now my app is running 5 threads via ofThread:
1 thread for live stream retrieval, 1 for live post-processing,
1 for recorded stream retrieval, 1 for recorded post-processing,
and the main thread, which only draws to GL.

this makes it a heck of a lot more efficient to draw multiple things at once to the screen. On the other hand, it has some issues: if a context times out, the thread can get ‘stuck’. I will need to spend some time setting up callbacks to detect and handle errors when using threads, I think. Thankfully the OpenNI API supplies mechanisms for creating callbacks.

Anyway, the multi-context thing may only make sense for certain use cases… I’m not totally sure what the best infrastructure is for multi-kinect, but it’s all I can come up with for being able to call context.WaitOneUpdateAll() and have it be non-blocking across all kinect input streams.
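
For reference, the playback-context setup looks roughly like this (an untested sketch; OpenFileRecording is the openNI call for .oni playback, though its exact signature varies between OpenNI versions, and the file name is just an example):

#include <XnCppWrapper.h>

void setupLiveAndPlayback() {
	xn::Context liveContext, playbackContext;
	liveContext.Init();
	playbackContext.Init();
	// stream a recorded .oni file into the second context
	playbackContext.OpenFileRecording("recording.oni");
	// each context can then be pumped on its own thread:
	//   liveContext.WaitOneUpdateAll(...);
	//   playbackContext.WaitOneUpdateAll(...);
}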

Hey Brian

Of course you can have multiple contexts - what was I thinking! - that’s exactly what I put in the example for playing and recording a stream…I hadn’t thought about it for multiple devices, as most of the code I can find init’s only one context and then multiple depth/image streams…

Interesting about the multi-thread thing…I need to play more with multiple Kinects to really work out how best to handle them…I have another one on its way this week after an impulse ebay purchase ;)

What kind of “post-processing” effects are you doing?

Also wondering if you have a github fork where you’re working on this code?

Check out this:
http://forum.openframeworks.cc/t/2realkinectwrapper/7525/0
It sounds really interesting and could be of great help to us.

BTW, so far no improvements on my side. Been very busy. :(

Hey Roy

Yep, just been checking it out…definitely a great place for us to look, learn and compare!

Down deep, the code structure looks very much like an updated ofxOpenNI ;) …though they use a more complex API style than the oF “norm” (ie., bit flags, try…catch…exception, namespaces etc.) - all things that I use a lot in my personal project code but that are perhaps not beginner friendly…

I think we should go one layer more abstract with this project…using those methods but keeping them wrapped up for the user (so they can use them if they know how, but don’t need to if they don’t understand). I’ve been working towards an API where, at its simplest, you just say setup(), update(), draw() and voilà, you’re up and running…
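
Purely as a hypothetical sketch of that simplest case (nothing here exists yet):

ofxOpenNI oni;

void testApp::setup()  { oni.setup();  }  // context + device + default generators
void testApp::update() { oni.update(); }  // pump the context once per frame
void testApp::draw()   { oni.draw();   }  // depth/image/skeleton overlays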

I guess this is one of the key differences between developing ‘wrapper’ code for *any* c/c++ environment and developing specifically for an oF audience and compile/runtime environment. Eg., 2Real are using Boost threads and, by the looks of it, dealing exclusively with pixel data for depth/ir/image rather than directly using textures, as that way their code is “agnostic” so to speak…

I’m wondering if people could give some input on general API style and internal structure for the “new” ofxOpenNI?

General things I’ve been thinking about:
* How oF-specific should it be? Currently ofxOpenNI uses only a couple of oF data types (ofVec3f, ofTexture) and could easily be made not to rely on oF at all…should we go with this “agnostic” approach? Or should we wholeheartedly rely on/leverage the “core”, ie., use ofThreads, ofEvents, ofSetColor, etc. etc.? (I think this is especially interesting in light of topic,6891.60.html)
* One thing I think that is really interesting about the 2Real code is the way they’re wrapping both openNI and the windows SDK…perhaps this approach could be used to wrap ofxKinect and ofxOpenNI (and even the Windows SDK)? Should we work toward that?
* How do we handle multi-kinect setups? Should the “end” user be iterating through devices to do draws, updates, get skeleton data? Or should this be handled entirely by a “manager” class? I’m trying to figure out a way where you could do both…but I’m not sure it’s worth it…? (this last is a little difficult for me at the moment as I’m coding blind, waiting for a second Kinect to arrive)
* Similar questions for the depth, image, ir, user and hand generator “nodes”: do we still want to be able to access these to instantiate context->device->generator nodes, or should it all just be wrapped in the “manager” class? Or both?

More specific things I’ve been thinking about:
* should we use a namespace?
* should we drop the ofxOpenNI prefix on file names and/or function names? (see discussion: topic,7386.new.html#new)
* should we use try/catch/throw exceptions to handle init’s etc or if/for/return true/false style checking?
* Should we use vectors or arrays or a combination of both to store things like device, image, depth and user nodes, and then point clouds, depth masks etc.? [personally I really don’t like vectors in code that is speed (ie., draw) dependent; however, avoiding them makes the implementation more difficult and the code tends toward write-only…]
* more?

I don’t have much time for coding this until after next Thursday (13/10/2011) as I have a show opening on that day. I’m aiming to put all these words (and any feedback I get by then) into action over that following weekend…including moving the repo etc. If pull requests are not in by that time I’ll do my best to incorporate them into a last update to the current form of ofxOpenNI before changing anything major so we still have a working version…

Might be nice to chat via skype on the 14th or 15th with those who want a bigger role in design/maintenance -> call me old school, but I find email and forum posts are no substitute for eyes and ears when it comes to collaboration ;)

Hi all

So what happened with this? Which is the latest version, gameover’s?

Also, in gameover’s github there are three branches: master, develop and experimental. Should I be working with develop or experimental? I don’t mind if it’s a little unstable. I’m going to use this for a project so will probably contribute fixes and features.