Starting with OF on Raspberry Pi

I am a complete novice with the Raspberry Pi, but I have a project that requires me to perform skeleton detection using an Xtion Pro (PrimeSense 3D sensor). I need to keep the Pi standalone. I have installed OpenNI2, OpenCV 2.3.1, and openFrameworks, but I'm not sure where to go from here. My Pi is connected to my laptop via an Ethernet cable and I control it through VNC from the laptop; the laptop also shares its internet connection with the Pi. I followed the official openFrameworks installation instructions for the Pi, but when I run the sample apps, the terminal just prints "runAppViaInfiniteLoop()" and nothing pops up or happens. I feel like I'm missing something really fundamental and would really appreciate help.

Hey there, do you have a physical monitor of any kind hooked up to the Raspberry Pi, either via HDMI or composite video output?

Not as of now. I view my RPi screen through a VNC viewer on my laptop.

I would recommend trying your setup with a physical monitor hooked up. Depending on what VNC system you are using, it may not be compatible with our ofAppEGLWindow code.
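
For what it's worth, seeing "runAppViaInfiniteLoop()" in the terminal usually just means the app's run loop has started, so the app itself is probably fine. On the Pi, the EGL window draws straight to the display rather than into an X desktop, which would explain why nothing shows up in a VNC session that only mirrors X. For reference, a stock openFrameworks main.cpp looks roughly like this (from the project template, trimmed):

```cpp
// main.cpp from the stock openFrameworks project template (trimmed).
// ofRunApp() is where "runAppViaInfiniteLoop()" gets logged: the window
// was created and the update()/draw() loop has begun.
#include "ofMain.h"
#include "ofApp.h"

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW); // on the Pi this creates an ofAppEGLWindow
    ofRunApp(new ofApp());               // blocks here, looping until the app exits
}
```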

Okay, so you are recommending that I view and control my Pi through VNC, but use an HDMI output connected to a monitor as a second viewer, and the app will run on that second screen? I will test this soon. Also, let's say this works and my samples run: would I be able to run a skeleton-detection program through OF on the Pi without any monitors (it needs to be portable)? If so, how do I write my own code, and how do I use OpenNI and OpenCV in it? Sorry for the influx of questions; I am still learning the ropes.

I would recommend using SSH or interacting with your Pi directly (hook up a keyboard and use a monitor) rather than VNC. VNC will likely slow your system down. Speed is particularly important because, in my experience, the Xtion Pro will run, but very, very slowly. I have not tried skeleton detection, but I am skeptical that the results will be satisfying speed-wise. You should give this one a try:

@jvcleave may have a bit more recent experience with this.

Alright, I will set up my Pi with SSH then. Are you saying that if I don't use SSH the speed for skeleton detection won't be satisfying, or that it won't be good either way? I will also contact @jvcleave for more information. Thank you!

I am currently working on setting up SSH. Should I use just SSH, or SSH with VNC to get a GUI? This link (http://www.akeric.com/blog/?p=1826) says that a GUI only works with VNC.

SSH will be faster, but OpenNI still might not be fast enough on the Raspberry Pi in general. The Pi is an amazing piece of hardware, particularly when it comes to hardware-accelerated graphics and multimedia decoding, but in my experience it is not ideal for pure CPU-intensive computer vision applications. It is certainly worth a try though!

Good luck!

I second @bakercp's tip to go with SSH. Once you get used to using the terminal (as using Linux forces you to in some ways), you'll wonder why you bothered using a graphical desktop :stuck_out_tongue:

I'd first edit and compile code on your main machine (though not ALL addons/scripts are formatted for ARMv6) and port it over.

Then cd into the sketch path and run make. If you need to adjust the sketch a bit natively, run make clean, then wash, rinse, repeat.

Ah okay, I'll give OpenNI a shot. I have sent an email to @jvcleave for assistance with integrating OF with OpenNI and OpenCV. Main machine as in my laptop, right? In that case, do you have experience with all of this on Windows?

Though oF is cross-compilable, some addons incorporate other libraries/drivers that may only be formatted for a specific OS. I haven't used OpenNI except on Mac, so I'd dig into the addon directories and look at the main OpenNI site.

Skeleton detection isn't available on the RPi, as it is provided by OpenNI's NiTE middleware (not open source), which did not have a public hard-float ARM version last time I checked.

What I am trying to do is put an Xtion Pro on an IV stand at chest level so that the IV stand can follow a patient around. For this, what I basically need to know is how far away the patient is from the stand and his/her position in the sensor's field of view, so the stand knows how to turn to keep the patient centered. OpenNI and OpenCV can detect joints, right? So couldn't I use those joints to determine a patient's distance and "left/right" position?

It's a bit confusing how the OpenNI pieces fit together. OpenNI gives you access to the depth/RGB streams of the Xtion Pro. You then need OpenNI's NiTE middleware to give you the skeleton/joint information.

If I were you, I would develop on the Windows version of openFrameworks using OpenNI2/NiTE and then deploy on a small-form-factor PC (although that is much more expensive than the Pi).

Most of the time, however, the patient will be the closest object to the sensor. Couldn't we use that together with the depth data to analyze the stream and get the information we need?
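
Something like this is the idea I have in mind, as a rough, untested sketch against the plain OpenNI2 C++ API (no NiTE, so it should at least build on the Pi; the 200 mm band around the nearest reading is just a guess for how "thick" a person is):

```cpp
// Rough sketch of the "closest object" idea with the plain OpenNI2 C++
// API -- no NiTE. Untested; tune the 200 mm band for your use case.
#include <OpenNI.h>
#include <cstdio>

int main() {
    if (openni::OpenNI::initialize() != openni::STATUS_OK) return 1;

    openni::Device device;
    if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK) return 1;

    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);
    depth.start();

    openni::VideoFrameRef frame;
    for (;;) {
        depth.readFrame(&frame);
        const openni::DepthPixel* px = (const openni::DepthPixel*)frame.getData();
        const int w = frame.getWidth();
        const int h = frame.getHeight();

        // Pass 1: nearest valid reading (0 means "no data" in OpenNI).
        int nearest = 100000; // mm, farther than the sensor can see
        for (int i = 0; i < w * h; ++i)
            if (px[i] > 0 && px[i] < nearest) nearest = px[i];

        // Pass 2: average x of everything within 200 mm of the nearest
        // point -- a crude proxy for where the patient's body is.
        long sumX = 0, count = 0;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                const int d = px[y * w + x];
                if (d > 0 && d < nearest + 200) { sumX += x; ++count; }
            }

        if (count > 0) {
            // -1.0 = far left of the view, 0 = centered, +1.0 = far right
            const float offset = (sumX / (float)count - w / 2.0f) / (w / 2.0f);
            printf("distance %d mm, offset %+.2f\n", nearest, offset);
        }
    }
    // Unreached in this sketch: depth.stop(); openni::OpenNI::shutdown();
}
```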

Something to note:

If you don't necessarily need 3D depth information, you can create your own 2D skeleton-tracking addon that uses assorted Haar training classifiers. I've come across many for eyes, upper/lower torso, full body, etc. Once you have the classifier, it's just a matter of creating a good algorithm for measuring miscellaneous contours, joints, velocity, etc. I know @theo had mentioned he would release his "poor man's Kinect" 2D skeleton tracker. Not sure what's going on with that, but I'm curious to get my hands on it too.
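
To make that concrete, a bare-bones version with one of OpenCV's stock cascades might look something like this (untested; the cascade filename and camera index are assumptions, so adjust them for your install):

```cpp
// Rough 2D-only sketch: detect upper bodies with a stock Haar cascade.
// haarcascade_upperbody.xml ships in OpenCV's data/haarcascades folder;
// the relative path here assumes you copied it next to the binary.
#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    cv::CascadeClassifier upperBody;
    if (!upperBody.load("haarcascade_upperbody.xml")) {
        printf("could not load cascade file\n");
        return 1;
    }

    cv::VideoCapture cap(0); // 0 = first camera; adjust for your setup
    if (!cap.isOpened()) return 1;

    cv::Mat frame, gray;
    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, CV_BGR2GRAY);
        cv::equalizeHist(gray, gray); // evens out lighting for the classifier

        std::vector<cv::Rect> bodies;
        upperBody.detectMultiScale(gray, bodies, 1.1, 3, 0, cv::Size(60, 60));

        for (size_t i = 0; i < bodies.size(); ++i) {
            // Horizontal position of the detection: -1 left, 0 center, +1 right
            float cx = bodies[i].x + bodies[i].width / 2.0f;
            float offset = (cx - frame.cols / 2.0f) / (frame.cols / 2.0f);
            printf("upper body at offset %+.2f\n", offset);
        }
    }
    return 0;
}
```

From there it's much the same geometry as the depth approach: the rectangle's size roughly stands in for distance, and its x position gives you left/right.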

Yes, I have seen some information about that and am planning on using just OpenCV to do it, so I am not hampered by the limitations of OpenNI on the Pi. Thank you!