Why does an ofxKinectV2 point vec3 have NaN values?

Hello everyone :grinning:

I’m using a Kinect v2 on macOS, and it works perfectly fine.

What I’m trying to do is get a vector of points from this addon: https://github.com/ofTheo/ofxKinectV2

In the example of this addon there is already an ofMesh pointCloud that receives the coordinates from kinect.getWorldCoordinateAt(x, y), so I just printed it out with a stringstream and ofDrawBitmapStringHighlight:

stringstream ss;
for(int i = 0; i < 50; ++i){
    ss << pointCloud.getVertices()[i] << endl;
}
ofDrawBitmapStringHighlight(ss.str(), 30, 50);

and I could see tons of NaN values.

Why is it NaN?
I also tried to filter out these NaN points by doing

const glm::vec3 & pt = kinect.getWorldCoordinateAt(x, y);
if(!isnan(pt.x)) pointCloud.addVertex(pt);

And the resulting point cloud had empty stripes that looked like it had lost many points.

So I guess these NaN values are visually represented as points, but why are they NaN?

Thank you for any tips!

Hi, that is most probably because the data is invalid. When the camera “sees” a pixel whose depth is farther or closer than the usable range, that pixel can be invalid. Subjects can also cast a “shadow”, which makes the data behind them invalid. When you look at the depth image as a grayscale image, the pure black areas are usually invalid depth data. It is completely normal to have such invalid values.
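If you want to get a feel for how much of a frame is invalid, something like this rough sketch (untested, just counting NaN x components of the unfiltered mesh) should give you an idea:

// Count how many vertices of the unfiltered mesh carry invalid (NaN) data.
std::size_t invalidCount = 0;
for (const auto & v : pointCloud.getVertices())
{
    if (std::isnan(v.x)) invalidCount++;
}
ofLogNotice("pointCloud") << invalidCount << " of " << pointCloud.getNumVertices() << " vertices are NaN";

Don’t be surprised if that is a large fraction of the frame.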

Hi!

Can you share the full code or a full example that demonstrates the issue?
If you are missing data that normally gets drawn after adding the NaN checks, that feels like something isn’t right: a point that draws correctly wouldn’t have been a NaN.

Thanks!
Theo

Hi @roymacdonald and @theo, thanks for the reply!

I did some inspection on my code and found that this weird effect was caused by a silly mistake :sweat_smile:

But I will explain it here anyway.
The ofApp.h remains the same as in the original example of the addon.

// ofApp.cpp
#include "ofApp.h"

void ofApp::setup()
{
    ofSetLogLevel(OF_LOG_VERBOSE);

    ofBackground(0);
    ofxKinectV2 tmp;
    std::vector <ofxKinectV2::KinectDeviceInfo> deviceList = tmp.getDeviceList();
    
    kinects.resize(deviceList.size());
    texDepth.resize(kinects.size());
    texRGB.resize(kinects.size());
    texRGBRegistered.resize(kinects.size());
    texIR.resize(kinects.size());

    panel.setup("", "settings.xml", 10, 100);
    
    ofxKinectV2::Settings ksettings;
    ksettings.enableRGB = true;
    ksettings.enableIR = true;
    ksettings.enableDepth = true;
    ksettings.enableRGBRegistration = true;
    ksettings.config.MinDepth = 0.5;
    ksettings.config.MaxDepth = 8.0;
  
    for(int d = 0; d < kinects.size(); d++) {
        kinects[d] = std::make_shared<ofxKinectV2>();
        kinects[d]->open(deviceList[d].serial, ksettings);
        panel.add(kinects[d]->params);
    }

    panel.loadFromFile("settings.xml");
    pointCloud.setMode(OF_PRIMITIVE_POINTS);

}


void ofApp::update()
{
    for (int d = 0; d < kinects.size(); d++)
    {
        kinects[d]->update();
        
        if (kinects[d]->isFrameNew())
        {
            if( kinects[d]->isRGBEnabled()) texRGB[d].loadData(kinects[d]->getPixels());
            if(kinects[d]->getRegisteredPixels().getWidth() > 10) texRGBRegistered[d].loadData(kinects[d]->getRegisteredPixels());
            if(kinects[d]->isIREnabled()) texIR[d].loadData(kinects[d]->getIRPixels());
            if(kinects[d]->isDepthEnabled() ) texDepth[d].loadData(kinects[d]->getDepthPixels());

            if (showPointCloud)
            {
                pointCloud.clear();
                for (std::size_t x = 0; x < texRGBRegistered[d].getWidth(); x++)
                {
                    for (std::size_t y = 0; y < texRGBRegistered[d].getHeight(); y++)
                    {
                        const glm::vec3 & p = kinects[d]->getWorldCoordinateAt(x, y);
                        // This is the mistake described below: addColor() still runs
                        // when the NaN check skips addVertex(), so colors and
                        // vertices drift out of sync.
                        if(!isnan(p.x)){
                            pointCloud.addVertex(p);
                        }
                        pointCloud.addColor(kinects[d]->getRegisteredPixels().getColor(x, y));
                    }
                }
            }
        }
    }
}


void ofApp::draw()
{
    if (!showPointCloud && currentKinect < texRGB.size())
    {
        drawTextureAtRowAndColumn("RGB Pixels",
                                  texRGB[currentKinect],
                                  0, 0);
 
        drawTextureAtRowAndColumn("RGB Pixels, Registered",
                                  texRGBRegistered[currentKinect],
                                  1, 0);

        drawTextureAtRowAndColumn("Depth Pixels, Mapped",
                                  texDepth[currentKinect],
                                  1, 1);

        drawTextureAtRowAndColumn("IR Pixels, Mapped",
                                  texIR[currentKinect],
                                  0, 1);
    }
    else
    {
        cam.begin();
        ofPushMatrix();
        ofScale(100, -100, -100);
        pointCloud.draw();
        ofPopMatrix();
        cam.end();
    }
    
    if( kinects.size() < 1 ) {
        ofDrawBitmapStringHighlight( "No Kinects Detected", 40, 40 );
    }
        
    panel.draw();
}


void ofApp::keyPressed(int key)
{
    if (key == ' ')
    {
        currentKinect = (currentKinect + 1) % kinects.size();
    }
    else if (key == 'p')
    {
        showPointCloud = !showPointCloud;
    }
}


void ofApp::drawTextureAtRowAndColumn(const std::string& title,
                                      const ofTexture& tex,
                                      int row,
                                      int column)
{
    float displayWidth = ofGetWidth() / numColumns;
    float displayHeight = ofGetHeight() / numRows;
    
    ofRectangle targetRectangle(row * displayWidth,
                                column * displayHeight,
                                displayWidth,
                                displayHeight);
    
    ofNoFill();
    ofSetColor(ofColor::gray);
    ofDrawRectangle(targetRectangle);
    
    ofFill();
    ofSetColor(255);
    if (tex.isAllocated())
    {
        ofRectangle textureRectangle(0, 0, tex.getWidth(), tex.getHeight());
        
        // Scale the texture rectangle to its new location and size.
        textureRectangle.scaleTo(targetRectangle);
        tex.draw(textureRectangle);
    }
    else
    {
        ofDrawBitmapStringHighlight("...",
                                    targetRectangle.getCenter().x,
                                    targetRectangle.getCenter().y);
    }
    
    ofDrawBitmapStringHighlight(title,
                                targetRectangle.getPosition() + glm::vec3(14, 20, 0));
}

This ofApp.cpp also stays mostly the same as the original example, except that in ofApp::update()


                        if(!isnan(p.x)){
                            pointCloud.addVertex(p);
                        }
                        pointCloud.addColor(kinects[d]->getRegisteredPixels().getColor(x, y));

was added to filter out the NaN points.
Because addColor() was still called for every pixel while addVertex() was skipped for the NaN ones, the colors ended up mapped to the wrong vertices. The stripe effect was gone after I moved addColor() into the conditional scope.
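For reference, the fixed version is just the same lines with addColor() moved inside the check, so a color is only added when a vertex is actually kept:

                        const glm::vec3 & p = kinects[d]->getWorldCoordinateAt(x, y);
                        if(!isnan(p.x)){
                            pointCloud.addVertex(p);
                            // Add the color only when the vertex is kept,
                            // so colors stay aligned with vertices.
                            pointCloud.addColor(kinects[d]->getRegisteredPixels().getColor(x, y));
                        }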

Thank you to both of you!

But could I ask one more thing regarding this addon?
I was hoping to get only the points within a pre-limited distance range, e.g. just like the depth image is limited by ksettings.config.MinDepth and .MaxDepth.

Is there a function like that already implemented in the addon?
I’m currently doing it by checking the z value,

if(p.z<1) pointCloud.addVertex(p);

but it would be nice to know if there is a more efficient way to do it.
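In case it helps to see what I mean, here is a minimal sketch of my current workaround, assuming the z value from getWorldCoordinateAt() is in the same meters as MinDepth/MaxDepth, and keeping the range from setup() in my own float members minDepth and maxDepth (those names are mine, not part of the addon):

// Inside the x/y loops of ofApp::update().
// minDepth / maxDepth are my own members mirroring
// ksettings.config.MinDepth and ksettings.config.MaxDepth.
const glm::vec3 & p = kinects[d]->getWorldCoordinateAt(x, y);
if (!std::isnan(p.x) && p.z >= minDepth && p.z <= maxDepth)
{
    pointCloud.addVertex(p);
    pointCloud.addColor(kinects[d]->getRegisteredPixels().getColor(x, y));
}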

Thanks!