ofxOpenNI Development

Do you know how I can assign the depth pixels to an ofxCvGrayscaleImage?
Does getDepthPixels return unsigned char? I can't assign the value from depthThreshold.getMaskPixels to
an ofxCvGrayscaleImage variable…
I could do this in the past… what has changed?


No that is not your problem.

By definition the types are the same or else the program would not compile!

I think maybe your problem is that you are trying to set a grayscale image with RGBA pixels: maskPixels are all RGBA pixels/textures, with the Alpha channel set to be the mask. This is a change from the older API where maskPixels were grayscale with no alpha channel.

As an idea: why not just use the depthPixels from the depthThreshold to do your finger tracking? They are already ‘masked’ in the sense that they only contain data from inside the depthThreshold ROI…

Just to confirm, adding this just after ‘depthThreshold.drawMask();’ in my example works as I would expect it to:

ofImage colImg;  
if(depthThreshold.getMaskPixels().getPixels() != NULL){ // need to check for NULL just in case!!  
    colImg.setFromPixels(depthThreshold.getMaskPixels().getPixels(), 640, 480, OF_IMAGE_COLOR_ALPHA);  
}  


Yes the depthThreshold.drawMask() works fine. I also can get the mask pixels into an ofImage.
I can get the image pixels into a cv::Mat image, but not the depth or mask pixels.
Here is what I want to do:

  1. Extract the detected hand as you can see in the attached image;
  2. Convert the extracted hand to grayscale;
  3. Use the hand mask pixels (extracted ROI from the depth image) to extract only the hand grayscale pixels (see attached image);
  4. Apply some methods I made to the hand binary image and hand grayscale image (extract hand features).

The code I’m trying to use is this:

#include "ofxOpenNI.h"
#include "ofxCv.h"

ofxOpenNI openNIDevice;
ofxOpenNIDepthThreshold dt;
ofxOpenNIROI depthROI;
ofxOpenNIHand hand;

cv::Mat matDepth; // cv::Mat depth information

ofImage ofDepth; // just here for tests
vector<ofImage> depth; // for multiple hand tracking

void testApp::setup(){

    // setup the hand generator

    // add all focus gestures (ie. wave, click, raise arm)

    ofImage temp;
    temp.allocate(640, 480, OF_IMAGE_COLOR); // I've tried all the image types and it works OK for the ofImage

    for(int i = 0; i < openNIDevice.getMaxNumHands(); i++) {
        dt = ofxOpenNIDepthThreshold(0, 0, false, true, true, true, true);
    }

    ofDepth.allocate(640, 480, OF_IMAGE_COLOR_ALPHA);
    matDepth = cv::Mat(cv::Size(ofDepth.getWidth(), ofDepth.getHeight()), CV_8UC4);
}


void testApp::exit(){
}

void testApp::update(){

    for (int i = 0; i < openNIDevice.getNumTrackedHands(); i++) {
        // get hand position
        ofPoint & handWorldPosition = hand.getPosition();
        ofxOpenNIDepthThreshold &dt = openNIDevice.getDepthThreshold(i);
        /*ofPoint leftBottomNearWorld = handWorldPosition - 100;
        ofPoint rightTopFarWorld = handWorldPosition + 100;

        depthROI = ofxOpenNIROI(leftBottomNearWorld, rightTopFarWorld);*/

        matDepth = cv::Mat(cv::Size(ofDepth.getWidth(), ofDepth.getHeight()), CV_8UC4, ofDepth.getPixels());
        // or the toCv option that I've tried also ----> matDepth = toCv(ofDepth.getPixels());
    }
}

void testApp::draw(){
    ofSetColor(255, 255, 255);

    openNIDevice.drawDepth(0, 0);
    openNIDevice.drawImage(640, 0, 320, 240);

    for (int i = 0; i < openNIDevice.getNumTrackedHands(); i++) {
        hand = openNIDevice.getTrackedHand(i);
        ofxOpenNIDepthThreshold &dt = openNIDevice.getDepthThreshold(i);

        ofImage temp;
        temp.allocate(640, 480, OF_IMAGE_GRAYSCALE);
        matDepth = toCv(dt.getDepthPixels());
        ofTranslate(320*i, 480);
        ofScale(0.5, 0.5);

        ofDepth.draw(0, 0);
        drawMat(matDepth, 640, 0);

        //depth[i].draw(0, 0);
    }
}

The results I obtain with this are the ones you can see in the attached files. :frowning:


![](http://forum.openframeworks.cc/uploads/default/2419/Screen Shot 2012-06-11 at 09.55.12.png)

![](http://forum.openframeworks.cc/uploads/default/2420/Screen Shot 2012-06-11 at 10.12.11.png)

![](http://forum.openframeworks.cc/uploads/default/2421/Screen Shot 2012-06-11 at 10.12.33.png)

I forgot to say that in step 3 the mask is logically ANDed with the hand image to extract the hand gray pixels
(handROI & handMask ----> the two are cv::Mat images).



Hi everyone,

I solved my problem this afternoon.
Attached are the testApp files I used.
It had to do with the mask pixels being RGBA, so I had to create a cv::Mat mask matrix by looking into the mask pixels.
Now I can extract the hand gray pixels with this mask and use all the algorithms I had for hand feature extraction.
Best regards and thank you to all who helped…
You are fantastic!



Hey paulo

Good to hear you solved the problem.

Since maybe it’s useful for others I added a setMaskPixelFormat method to ofxOpenNIUsers and ofxOpenNIDepthThreshold and a setMaskPixelFormatAllUsers method to ofxOpenNI.

This lets you decide whether the maskPixels and the corresponding maskTexture use RGBA or MONO pixels.

So now you can do:

for(int i = 0; i < openNIDevice.getMaxNumHands(); i++){  
    ofxOpenNIDepthThreshold depthThreshold = ofxOpenNIDepthThreshold(0, 0);  
    depthThreshold.setMaskPixelFormat(OF_PIXELS_MONO); // default is OF_PIXELS_RGBA but this makes them grayscale!  
}  

So you shouldn't have to iterate over the maskPixels to set the cv::Mat now.

Wondering why/what you use the rgb/grayscale image for? Can you share the algorithms for hand feature extraction? I kind of thought you’d be able to just use the maskPixels to get reasonably accurate finger tracking (but then I’ve not tried it)…

PS: Just in case you're not sure, there is no need to quote your own posts (especially if they're quite long), and if you click on the button with a hash symbol (#) you can mark your code to be properly formatted… the tags look like [ code ][ /code ]… anything in between them will be correctly formatted with line numbers and highlighting.


Thanks for your excellent work! That is fantastic and very useful.
I will try it. It saves a lot of work.

I'm still optimizing the algorithms for feature extraction; they are part of my PhD. As soon as I can share them I will. You all deserve it ;-). I have a finger detection algorithm that I'm going to adapt to the recent changes, and then I can share it… so many things to do and so little time.

As for the quotes… sorry… I'm still learning how to use the forum, and I never know where to answer…
It will get better, I hope :slight_smile:

I have been testing the ofxOpenNI addon today to play with the complete point cloud via depthThreshold (not the user one). I think I found a bug when using the ImageGenerator. As a testing app I use almost exactly the simple example, registering depth and image and just one depthThreshold (neither users nor hands…).

The app crashed when running for the first time. After doing some tests it seems that there is no color image data yet when the program runs the following lines in void ofxOpenNI::updateDepthThresholds (ofxOpenNI.cpp):

                    const XnRGB24Pixel* pColor;  
                    if(g_bIsImageOn){ // <- here a flag as a dirty workaround  
                        pColor = g_ImageMD.RGB24Data();  
                        depthThreshold.pointCloud[0].addColor(ofColor(pColor[nIndex].nRed, pColor[nIndex].nGreen, pColor[nIndex].nBlue)); // <- IT CRASHES HERE  
                    }  
                    depthThreshold.bNewPointCloud = true;  

I didn't have time to solve it in an elegant manner, but if you set a dirty flag manually (check the previous code) and wait until the image appears on screen before enabling the flag and entering that branch, then the point cloud is correctly generated.
Hope this can help someone. Anyway, where would be the appropriate place to establish an automated flag that avoids updating the depthThreshold until data starts arriving from the ImageGenerator?
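One way such an automated flag could look, sketched in plain C++ (the struct and member names are illustrative, not existing ofxOpenNI members): latch the flag the first time a non-null image pointer is seen, and skip the colour lookup until then.

```cpp
#include <cstddef>

// Latching guard: stays false until valid data has been seen at least once,
// then stays true even if a later frame momentarily returns NULL.
struct ImageReadyFlag {
    bool ready;
    ImageReadyFlag() : ready(false) {}
    // Call once per update with the current image data pointer.
    bool check(const unsigned char* data) {
        if (data != NULL) ready = true;
        return ready;
    }
};
```

The update loop would then wrap the `addColor` call in `if (flag.check(imagePtr)) { … }` instead of a hand-set boolean.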

Besides, the example apps run very slowly on my computer, even just the simple example. The user recognition example stops recognizing within a few seconds. Does anyone else experience such low speed and artifacts on a Windows machine, or is it maybe just my old computer?

Test environment: Windows 7 64-bit, Code::Blocks, and OpenNI & NITE. My computer has 4 GB RAM and an Intel Core Duo at 2.53 GHz. openFrameworks v7.1 development branch.

Thanks a lot!
BTW, very good and complete work with this addon!! Thanks for it!

Hi gameover & paulo,

I tested the hand tracking examples of the development branch with Win7/VS2010/OF007 (src-HandTracking-Medium/src-HandTracking-Simple). I found that the program, in both release and debug mode, will crash when a tracked hand moves too fast. Error message:

Debug assertion failed!  
File: c:\program files\microsoft visual studio 10.0\vc\include\xtree  
Line: 256  
0x00a63e30 "map/set iterator not incrementable"	const wchar_t *  

Any idea?


I’m using mac osx without any problems.
I have no experience with VS2010. Are you sure you included all the addons?
Good luck. I hope you can solve the problem… no, I’m sure you will solve the problem :slight_smile:
Best regards,


I tried your code like this:

setup() {  
    for(int i = 0; i < openNIDevice.getMaxNumHands(); i++) {  
        dt = ofxOpenNIDepthThreshold(0, 0);  
    }  
}  
  
update() {  
    depthMat = toCv(dt.getMaskPixels());  
    cout << depthMat.rows << ":" << depthMat.cols << ":" << endl;  
}  

but my resulting cv::Mat always becomes a (0,0) Mat.

I tried first to get the pixels to an ofImage,

ofImage temp;  
temp.setFromPixels(dt.getMaskPixels().getPixels(), width, height, OF_IMAGE_GRAYSCALE);  
depthMat = toCv(temp.getPixelsRef());  
cout << depthMat.rows << ":" << depthMat.cols << ":" << endl;  

but it gives error on setFromPixels.

template<typename PixelType>  
void ofPixels_<PixelType>::setFromPixels(const PixelType * newPixels, int w, int h, int channels){  
	allocate(w, h, channels);  
	memcpy(pixels, newPixels, w * h * getBytesPerPixel());  
}  
EXC_BAD_ACCESS (code=2, address=0x0)

Also with an ofxCvGrayscaleImage… the same result.
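The crash at address 0x0 is consistent with getPixels() returning NULL before the mask has been allocated: setFromPixels passes the pointer straight into memcpy with no check. The missing guard can be sketched in plain C++ (function and parameter names are illustrative):

```cpp
#include <cstring>
#include <vector>

// Copy numBytes from src into dst only if src is non-null; returns whether
// a copy actually happened. This is the check setFromPixels is missing when
// the source pixel buffer has not been allocated yet.
bool safeCopyPixels(std::vector<unsigned char>& dst,
                    const unsigned char* src, size_t numBytes) {
    if (src == NULL || numBytes == 0) return false; // nothing to copy yet
    dst.resize(numBytes);
    std::memcpy(&dst[0], src, numBytes);
    return true;
}
```

In the app, the equivalent is testing `dt.getMaskPixels().getPixels() != NULL` before calling setFromPixels, as the earlier post in this thread does.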

I tested again with the 0071 master branch and got it working, so it seems the development branch is already incompatible with 007.

For the master branch, is there any way to get the position of a neck joint on screen? (Similar to ofxTrackedHand.projectPos)

How can I play multiple .oni videos and extract the image, depth and hands as in live video?
What is the order of setup commands when initializing the openNIKinect?

I have my code like this:

void testApp::setupVideo(){  
    for(int i = 0; i < player.getMaxNumHands(); i++) {  
        ofxOpenNIDepthThreshold dt = ofxOpenNIDepthThreshold(0, 0);  
    }  
}  
// when it starts playing.....  

but it crashes on the depth image acquisition?!

Any help?


I think I managed to solve the previous problem, but I have another one.

I’m trying to save videos with hand gestures for later processing, but the mask pixels only get allocated after hand gesture recognition (ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Allocating mask pixels for depthThreshold).

This gives me an error when getting the mask pixels with depthThreshold.getMaskPixels(). The size is 0:0.
Is there a way to get around this problem?



Can the depth mask pixels be allocated sooner than they are at the moment?
I have this kind of problem:

ofxOpenNIDevice[0]: OF_LOG_VERBOSE: (CB) Hands Create: OK112
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Allocating mask pixels for depthThreshold
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Allocating mask texture for depthThreshold
OpenCV Error: Assertion failed (0 <= roi.x && 0 <= roi.width && roi.x + roi.width <= m.cols && 0 <= roi.y && 0 <= roi.height && roi.y + roi.height <= m.rows) in Mat, file /Users/theo/Downloads/OpenCV-2.3.1/modules/core/src/matrix.cpp, line 303

when trying to extract something from the depth image (in this case an ROI): the depth image has size 0.
Sometimes it gives the depth image OK, but only after a period of time… do I have to use the method isAllocated()?
Any idea?
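The OpenCV assertion quoted above fires because the requested ROI does not fit inside the (still empty) image. Besides waiting for allocation, the ROI can be clamped to the image bounds first, sketched here in plain C++ with an illustrative Rect type:

```cpp
#include <algorithm>

struct Rect { int x, y, w, h; };

// Clamp a rectangle so it fits inside a cols x rows image. An empty image
// (cols or rows == 0) yields an empty rectangle, so no ROI gets extracted
// and OpenCV's bounds assertion cannot fire.
Rect clampROI(Rect r, int cols, int rows) {
    r.x = std::max(0, std::min(r.x, cols));
    r.y = std::max(0, std::min(r.y, rows));
    r.w = std::max(0, std::min(r.w, cols - r.x));
    r.h = std::max(0, std::min(r.h, rows - r.y));
    return r;
}
```

A zero-area result then signals "skip this frame", which covers both the unallocated-image case and a hand ROI that partially leaves the frame.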


If I use the latest experimental commit, the app crashes during startup. If I revert to the "Added setMaskPixelFormatAllUsers and setMaskPixelFormat to ofxOpenNI, ofxOpenNIUser, ofxOpenNIDepthThreshold" commit then it works again. It seems to be the "Changed scoped lock() to lock(mutex)" commit that breaks it for me.

ofxOpenNIDevice[0]: OF_LOG_WARNING: Using a NASTY hack to silence SIGNAL errors on exit - read the comments at line ~1712 of ofxOpenNI.cpp  
ofxOpenNIDevice[0]: OF_LOG_NOTICE: Init context...  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Context initilizedstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_NOTICE: openni driver version:  
ofxOpenNIDevice[0]: OF_LOG_NOTICE: Adding licence...  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Adding licence: PrimeSense 0KOIk2JeIBYClPWVnMoRKn5cdY4=status:OK  
ofxOpenNIDevice[0]: OF_LOG_NOTICE: Init device...  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Enumerate devicesstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_NOTICE: Found1devices connected  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Creating production tree for device 0status:OK  
ofxOpenNIDevice[0]: OF_LOG_NOTICE: Adding generator typeXN_NODE_TYPE_IMAGE  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Creating XN_NODE_TYPE_IMAGE generatorstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Setting Image1 resolution: 640 x 480 at 30fpsstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Starting XN_NODE_TYPE_IMAGE generatorstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Allocating image  
ofxOpenNIDevice[0]: OF_LOG_NOTICE: Adding generator typeXN_NODE_TYPE_DEPTH  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Creating XN_NODE_TYPE_DEPTH generatorstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Setting Depth1 resolution: 640 x 480 at 30fpsstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Starting XN_NODE_TYPE_DEPTH generatorstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Allocating depth  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Register viewpoint depth to RGBstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Set mirror depth ONstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Set mirror image ONstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_NOTICE: Adding generator typeXN_NODE_TYPE_USER  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Creating XN_NODE_TYPE_USER generatorstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Starting XN_NODE_TYPE_USER generatorstatus:OK  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Allocating users  
ofxOpenNIDevice[0]: OF_LOG_NOTICE: User generator DOES NOT require pose for calibration  
ofxOpenNIDevice[0]: OF_LOG_VERBOSE: Set skeleton profilestatus:OK  
ofxOpenNIDevice[0]: OF_LOG_NOTICE: Starting ofxOpenNI with threading  
Program ended with exit code: 0  

if I search and replace

if(bIsThreaded) Poco::ScopedLock<ofMutex> lock(mutex);  

back to

if(bIsThreaded) Poco::ScopedLock<ofMutex> lock();  

it works. No idea if that breaks other stuff, or what the mutex argument means. I am on OS X 10.7 with Xcode, compiled for 10.6 and OF0071.
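For what it's worth, the difference between those two lines is a classic C++ pitfall: `Poco::ScopedLock<ofMutex> lock();` is parsed as the declaration of a *function* named `lock` returning a ScopedLock (the "most vexing parse"), so no lock object is ever constructed and the mutex is never taken, which is why that version never deadlocks. `lock(mutex)` constructs a real RAII lock. A standalone illustration with an instrumented lock type (names are illustrative):

```cpp
#include <mutex>

std::mutex m;
int lockedCount = 0; // how many times the mutex was really taken

// Small RAII wrapper that records whether it actually locked the mutex.
struct CountingLock {
    explicit CountingLock(std::mutex& mx) : mx_(mx) { mx_.lock(); ++lockedCount; }
    ~CountingLock() { mx_.unlock(); }
    std::mutex& mx_;
};

void withRealLock() {
    CountingLock lock(m);   // constructs an object: mutex locked here
}

void withVexingParse() {
    CountingLock lock();    // most vexing parse: declares a function, locks nothing
}
```

So the old `lock()` line was silently a no-op; the crash with `lock(mutex)` suggests a genuine locking-order or re-entrancy problem that the no-op version was masking.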

I have a question, or possibly a suggestion. I don't know if there is something I don't understand yet and it's already possible with the current version, but I tweaked the ofxOpenNIUserEvent to use a custom dispatcher, so that I can subscribe to it anywhere in my code:

extern ofEvent<ofxOpenNIUserEvent> ofxOpenNIUserEventDispatcher;  

Another possible bug I found is that when I replay a .oni, the app crashes when the playback loops. But only if I have a user generator active.