Kinect + oF + Unity

I’m looking for some general guidance on what I would need to connect these three technologies: Kinect, openFrameworks, Unity.

The idea is that the Kinect will capture visual data, an openFrameworks app will process that information and send usable data to a Unity game. The user could jump in real life and the game character would also jump. Or the user tilts their arms like airplane wings and the character is steered left/right in-game.

I have seen topics on the forum, but I’m having a hard time understanding what plugins/services I really need. I have seen mention of ofxKinect, ofxOpenNI, ofxOsc, Syphon, etc.

Any insight is appreciated!

  • Gabe

I’ve started on something similar. My approach was:

openFrameworks
ofxOpenNI to get the skeleton
Send the skeleton data (ofxLimbs) out via OSC
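The OSC part itself is small. Something like this is all the sending side needs (just a sketch, assuming ofxOsc is added to the project; the helper names, the "/skeleton/leftHand" address, and the port are made up, you and the Unity side just have to agree on them):

#include "ofMain.h"
#include "ofxOsc.h"

ofxOscSender sender;   // would normally live in testApp.h

void setupOsc() {
    // Unity (or any OSC receiver) listens on this host/port
    sender.setup("127.0.0.1", 12345);
}

// call once per frame with a joint position pulled from ofxOpenNI
void sendJoint(const ofPoint& joint) {
    ofxOscMessage m;
    m.setAddress("/skeleton/leftHand");   // address pattern is up to you
    m.addFloatArg(joint.x);
    m.addFloatArg(joint.y);
    m.addFloatArg(joint.z);
    sender.sendMessage(m);
}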

My plan was to tie it into Unity, similar to what James George did with OpenTSPS and Unity. You can see a demo and get the source here:
https://github.com/obviousjim/OpenTSPS-Unity3d-Extension
http://vimeo.com/17177478

There is also a wrapper for direct Unity/OpenNI communication, but I believe it is Windows only:
https://github.com/OpenNI/UnityWrapper

Hmm, maybe what I’m trying to accomplish is simpler than that.

http://youtu.be/SEQ8kU4aOxg

I’m not trying to get exact user limb locations into Unity, only basic commands, like in the above video. So when the user jumps, my OF app could tell Unity to have the game character “jump” and pass a value representing how high.

Could I accomplish that using only OF, ofxOsc, and Unity? Will ofxOsc allow me to somehow pass commands to Unity?
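To show what I mean by “basic commands”, this is roughly what I picture on the OF side (completely untested sketch; the checkForJump helper, the "/player/jump" address, the port, and the threshold are all made up):

// when a jump is detected in the OF app, fire a single OSC message for Unity
#include "ofMain.h"
#include "ofxOsc.h"

ofxOscSender sender;   // sender.setup("127.0.0.1", 12345) somewhere in setup()

void checkForJump(float headY, float standingHeadY) {
    float jumpHeight = standingHeadY - headY;   // smaller y = higher up in the image
    if (jumpHeight > 40) {                      // made-up threshold, in pixels
        // in practice you'd latch this so it only fires once per jump
        ofxOscMessage m;
        m.setAddress("/player/jump");           // whatever address Unity expects
        m.addFloatArg(jumpHeight);              // the "how high" value for the character
        sender.sendMessage(m);
    }
}

Unity doesn’t speak OSC out of the box, so the Unity side would still need some OSC/UDP receiver script to parse that message and trigger the jump.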

I have actually done a few projects that do what you want.

This shows people holding their arms out and controlling an airplane:
http://vimeo.com/23671274

And this is a gesture-controlled interface:
http://vimeo.com/21732629

For the airplane project I just tracked the player’s hands and sent those two hand points to Unity using TUIO. Within Unity I did some calculation for the plane’s rotation based on the two points sent to it.
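The rotation part is nothing fancy, roughly this (a sketch of the idea, written in oF-style C++ here even though in my project the calculation lived on the Unity side):

#include "ofMain.h"

// turn the two hand points into a bank/roll angle for the plane
float rollFromHands(const ofPoint& leftHand, const ofPoint& rightHand) {
    float dx = rightHand.x - leftHand.x;
    float dy = rightHand.y - leftHand.y;
    // angle of the line between the hands = how far the "wings" are tilted
    return ofRadToDeg(atan2f(dy, dx));
}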

For the gesture interface I based my code on TuioKinect. I heavily modified it and also added ofxSyphon. I used Syphon to send the video data and TUIO to send the tracking point data.
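The Syphon part is only a couple of calls. Roughly this (a sketch assuming ofxSyphon is added to the project; the server name and helper names are arbitrary):

#include "ofxSyphon.h"

ofxSyphonServer syphonServer;   // would normally be a member of testApp

void setupSyphon() {
    syphonServer.setName("Kinect Video");   // the name the receiving app looks for
}

void drawAndPublish() {
    // ... draw the Kinect video / tracking view here ...
    syphonServer.publishScreen();            // or publishTexture(&someTexture)
}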

I recommend looking into TuioKinect and ofxSyphon and going through as many examples as you can find. Also consider what the project is for. Do you really need to use a Kinect? Can you instead use blob tracking with OpenCV?

I ask these questions because if you decide to really take advantage of the Kinect, e.g. for body tracking, consider the time it takes the Kinect to calibrate a user. Most of the time people will think your project is not working even though it is just calibrating.

az_rr, that is some impressive work, thank you for posting.

I now have the TuioKinect app and I can see it communicating with the sample Unity project that comes with uniTuio, so that is impressive progress!

The issue that I’m seeing now is that the user has to stand at a very specific depth and then hold their hands out in front of them. When you move just a little bit too close, the hand detection stops working because your legs/torso start being tracked. If the user moves back a few inches too far, the tracking is lost altogether. My hope was that the ‘rules’ wouldn’t be so strict, that the user would have a little leeway, maybe 1 or 2 feet.

My theory behind using the Kinect was that if this installation was out on a sidewalk, the app/game could focus on the person in the foreground (closest to the display) and not on people randomly walking behind them. The Kinect would know that the other people were out of range and therefore ignore them. Maybe there is still a way to use a regular camera and OpenCV instead?

What platform are you on?

OS X 10.6.8, OF 0062, Xcode 3.2.2

I have 2 screenshots as examples. The first shows TuioKinect only picking up my hands at a very specific depth, and the second shows ofxOpenNI (by gameoverhack) picking up my hands at various depths.

http://www.flickr.com/photos/gabe-molochko/5933272983
http://www.flickr.com/photos/gabe-molochko/5933272843

az_rr, is it possible I’m doing something wrong with the TuioKinect app? Thanks again for your insight!

I think I need to try to build my own OSC connection to Unity from OF and send hand tracking data from OpenNI (which is what jvcleave suggested). It will take me some time, though, to understand what OpenNI is doing and how to get usable data into Unity from the hand tracker.

In your TuioKinect screenshot you have the near and far thresholds set very close to each other. If you make the gap bigger, there is more space for the Kinect to track in. This can also result in too many TUIO points being sent to Unity, though; it depends on what you want to achieve.
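Under the hood the thresholds are basically just keeping the depth pixels that fall inside a band, which is also what lets you ignore people further back. A rough sketch of the idea (assuming ofxKinect and ofxOpenCv are in the project; the helper name and threshold values are made up and need tuning on site):

// carve out a depth band so only the nearest person is kept
ofxCvGrayscaleImage thresholded;      // allocate(640, 480) in setup()
int nearThreshold = 230;              // brighter pixel = closer to the camera
int farThreshold  = 170;

void updateDepthMask(ofxKinect& kinect) {
    kinect.update();

    unsigned char* depth = kinect.getDepthPixels();   // 8-bit depth image, 640x480
    unsigned char* out   = thresholded.getPixels();
    int n = 640 * 480;

    for (int i = 0; i < n; i++) {
        // keep pixels inside the band, zero everything else (background walkers etc.)
        bool inBand = depth[i] > farThreshold && depth[i] < nearThreshold;
        out[i] = inBand ? 255 : 0;
    }
    thresholded.flagImageChanged();
    // run the contour finder / hand tracking on 'thresholded' instead of the raw image
}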

I would add ofxTuio to ofxOpenNI.

This is pretty hacky, but it will send the two hands via TUIO.

  
  
// assumes: 'user' is the ofxOpenNI user generator, 'tuioServer' is the TUIO server,
// leftHand/rightHand are point members declared elsewhere (e.g. in testApp.h),
// and updateKalman() is the Kalman smoothing helper from my TuioKinect-based code

// get the tracked user from ofxOpenNI
ofxTrackedUser* tracked = user.getTrackedUser(userId);

// set our left and right hand (end point of each lower arm limb)
leftHand = tracked->left_lower_arm.end;
rightHand = tracked->right_lower_arm.end;

// start a new TUIO frame
TuioTime frameTime = TuioTime::getSessionTime();
tuioServer->initFrame(frameTime);

// convert the hand positions to normalised TUIO points (640x480 image)
TuioPoint tpl(leftHand.x / 640, leftHand.y / 480);
TuioPoint tpr(rightHand.x / 640, rightHand.y / 480);

// left hand: reuse the nearest existing cursor if it's close enough, otherwise add a new one
TuioCursor* tcurl = tuioServer->getClosestTuioCursor(tpl.getX(), tpl.getY());

if ((tcurl == NULL) || (tcurl->getDistance(&tpl) > 0.2))
{
	tcurl = tuioServer->addTuioCursor(tpl.getX(), tpl.getY());
	updateKalman(tcurl->getCursorID(), tcurl);
}
else
{
	TuioPoint kpl = updateKalman(tcurl->getCursorID(), tpl);
	tuioServer->updateTuioCursor(tcurl, kpl.getX(), kpl.getY());
}

// right hand: same again
TuioCursor* tcurr = tuioServer->getClosestTuioCursor(tpr.getX(), tpr.getY());

if ((tcurr == NULL) || (tcurr->getDistance(&tpr) > 0.2))
{
	tcurr = tuioServer->addTuioCursor(tpr.getX(), tpr.getY());
	updateKalman(tcurr->getCursorID(), tcurr);
}
else
{
	TuioPoint kpr = updateKalman(tcurr->getCursorID(), tpr);
	tuioServer->updateTuioCursor(tcurr, kpr.getX(), kpr.getY());
}

// remember to commit the frame (tuioServer->commitFrame()) if that isn't already
// happening elsewhere in your update loop