Kinect and Strings


I am very new to strings and have done a couple of examples with the Kinect. I want to achieve the effect of a silhouette being typed in text. The text is a passage which I’m loading as a string. I’m splitting the problem into two parts:

  1. The typing effect. I have experimented with dividing the passage into a vector of strings using ofSplitString(passage, " "). I'm not sure whether I would be better off with ofTrueTypeFont, using drawStringAsShapes, or stringWidth and stringHeight.

  2. Treating the blob from the Kinect as though it were a canvas, where the words start typing from top to bottom and left to right.

Thanks in advance

Hi, do you want to draw the text around the silhouette, rotating and moving each letter so it follows the silhouette's outline, or do you just want some horizontal lines of text that adjust to fit inside the silhouette?
In any case I recommend looking at @zach 's repos, as there are lots of examples that can help.

In any case you need to use ofTrueTypeFont (or some other addon for rendering text).
Using drawStringAsShapes instead of drawString makes no difference for this; what you do need are stringHeight and stringWidth, in order to fit each string into place.
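To make that concrete, here is a sketch of the fitting logic in plain C++, assuming a monospaced font so a string's pixel width is simply glyph advance times length (with ofTrueTypeFont you would call stringWidth() on each candidate line instead; wrapWords and glyphAdvance are illustrative names, not oF API):

```cpp
#include <cassert>
#include <sstream>
#include <string>
#include <vector>

// Pack words into lines no wider than maxWidth. The width test on each
// candidate line is the same comparison you would do with
// ofTrueTypeFont::stringWidth(); here we fake it with a fixed advance.
std::vector<std::string> wrapWords(const std::string& passage,
                                   float glyphAdvance, float maxWidth) {
    std::istringstream in(passage);   // stands in for ofSplitString(passage, " ")
    std::string word;
    std::vector<std::string> lines;
    std::string current;
    while (in >> word) {
        std::string candidate = current.empty() ? word : current + " " + word;
        float width = glyphAdvance * candidate.size();  // ~ stringWidth(candidate)
        if (width <= maxWidth || current.empty()) {
            current = candidate;          // word still fits on this line
        } else {
            lines.push_back(current);     // line is full, start a new one
            current = word;
        }
    }
    if (!current.empty()) lines.push_back(current);
    return lines;
}
```

The same loop, run once per silhouette row with maxWidth set to that row's width, is the core of the "fill a shape with text" effect.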

So instead of writing the words from left to right, suppose I'm just interested in the silhouette being populated with one vector of words at one distance and another vector of words at a slightly further distance. Should I just create a third threshold like this to achieve the effect:

colorImg.allocate(kinect.width, kinect.height);
grayImage.allocate(kinect.width, kinect.height);
grayThreshNear.allocate(kinect.width, kinect.height);
grayThreshMiddle.allocate(kinect.width, kinect.height);
grayThreshFar.allocate(kinect.width, kinect.height);

nearThreshold = 255;
middleThreshold = 225;
farThreshold = 190;
bThreshWithOpenCV = true;

// zero the tilt on startup
angle = 0;

// start from the front
bDrawPointCloud = false;

void ofApp::update() {
//    typewriter.update();
    // there is a new frame and we are connected
    if(kinect.isFrameNew()) {
        // load grayscale depth image from the kinect source
        // we threshold at three depths - near, middle and far -
        // and combine the resulting masks with cvAnd
        if(bThreshWithOpenCV) {
            grayThreshNear = grayImage;
            grayThreshMiddle = grayImage;
            grayThreshFar = grayImage;
            grayThreshNear.threshold(nearThreshold, true);
            grayThreshMiddle.threshold(middleThreshold, true);
            grayThreshFar.threshold(farThreshold);
            // note: the second cvAnd overwrites the result of the first,
            // so only one band survives in grayImage
            cvAnd(grayThreshNear.getCvImage(), grayThreshMiddle.getCvImage(), grayImage.getCvImage(), NULL);
            cvAnd(grayThreshMiddle.getCvImage(), grayThreshFar.getCvImage(), grayImage.getCvImage(), NULL);
        } else {
            // or we do it ourselves - show people how they can work with the pixels
            ofPixels & pix = grayImage.getPixels();
            int numPixels = pix.size();
            for(int i = 0; i < numPixels; i++) {
                if(pix[i] < nearThreshold && pix[i] > middleThreshold) {
                    pix[i] = 255;
                } else if(pix[i] < middleThreshold && pix[i] > farThreshold) {
                    pix[i] = 100;
                } else {
                    pix[i] = 0;
                }
            }
        }

        // update the cv images
        grayImage.flagImageChanged();

        // find contours which are between 10 pixels and half of w*h in size.
        // find holes is set to false, so we only get exterior contours
        contourFinder.findContours(grayImage, 10, (kinect.width*kinect.height)/2, 20, false);
    }
}

Is that thresholding working? Is it giving you anything useful? I would guess that it does not. What would give you much better results is to use something like ofxClipper to compute an offset of the polyline that the contour finder gives you.

I might be overcomplicating it. All I want is for the shape of any object between 50 cm and 150 cm from the Kinect to be filled with the word "close" repeated over and over, and anything beyond 150 cm to say "far". What is the most efficient way to loop over all the pixels and get the depth values, so I can then apply the conditionals?

do you mind if the word gets clipped? if not then the easiest way would be to simply mask a prerendered image with the words on it with the mask that the thresholds produce.

Yeah, I considered that but it doesn't quite have the same feel. I like the way it updates with the movement of the person, and I would also like to affect the scale of the font eventually. This is my draw function so far. But again, I only have that one threshold. Can I use getDepthPixels()? Or getBrightness() to produce a different output at a different distance?

float ratioX = ofGetWidth()/grayImage.getWidth();
float ratioY = ofGetHeight()/grayImage.getHeight();
int stepY = 5;
int stepX = ofRandom(10,50);
for (int y=0; y < grayImage.getHeight(); y+=stepY) {
    for (int x=0; x < grayImage.getWidth(); x+=stepX) {
        if (grayImage.getPixels().getColor(x,y) == 255 && ofGetFrameNum()%1 == 0) {
            myFont.load("Courier New Bold", 10);
            myFont.drawString(wordlist2[(int)ofRandom(0, wordlist2.size())], x*ratioX, y*ratioY);
//          myFont.drawString("test",x*ratioX, y*ratioY);
        }
    }
}

you certainly can use getDepthPixels() to generate a different result.
There are loads of things you can do with that. It really depends on what you want to achieve. Maybe if you make some sort of sketch or drawing that could point in the direction of what you are looking for.
Also, the following line of code is completely unnecessary inside draw, and even more so inside those nested for loops. Just put it in setup and it will be fine.

    myFont.load("Courier New Bold", 10);

And this check:

    ofGetFrameNum()%1 == 0

is doing nothing. It will always be true, unless you change the 1 to some other number.

Hi, thanks for the tips. This is more or less the idea. See how the numbers correspond to the distance (I don't necessarily want to do it as a point cloud), but I would like to learn how to access and store the raw depth data so I can then attach strings to those points.

I see.
It is actually a lot easier.
ofxKinect has some functions for that.
First, when you call kinect.getDepthPixels() what you get are depth values that have been compressed to fit the 0-255 range (8-bit data). Whereas when you call kinect.getRawDepthPixels() you get the real reading of the kinect, which is 11 bits deep (the data is stored as 16 bits, but only 11 of those are used), and its numeric value for each pixel corresponds more or less to millimeters in the real world.
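To illustrate the compression just described, here is a plain C++ sketch that squashes an 11-bit raw reading (0..2047) into the 8-bit 0..255 range. The linear mapping is only an illustration of the idea, not ofxKinect's exact conversion (which uses a lookup table and near/far clipping planes):

```cpp
#include <cassert>
#include <cstdint>

// Compress an 11-bit raw depth reading (roughly millimeters) into the
// 8-bit range that getDepthPixels() hands you. Hypothetical helper,
// not ofxKinect API.
std::uint8_t rawDepthTo8Bit(std::uint16_t raw11) {
    if (raw11 > 2047) raw11 = 2047;  // only 11 of the 16 stored bits are used
    return static_cast<std::uint8_t>((raw11 * 255) / 2047);
}
```

This also shows why the 8-bit image loses precision: about 8 mm of real-world depth collapse into each gray level, which is why the raw pixels are the thing to store if you want to attach strings to real distances.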
Just as a helper function you can call

 kinect.getDistanceAt(int x, int y);

which returns a float with the depth value for that pixel.
Then if you want the position in the real world for a certain pixel in the image you can call

	kinect.getWorldCoordinateAt(int cx, int cy); // center of image is (0,0)

Take a look at the kinect example, as it shows how to use these to create a point cloud. From there you can easily replace the drawing of a point with a letter, a number, or whatever you want.

hope this helps.

Hi Roy,

Thank you, this is definitely helping. Two more questions:

What type of variable do I store kinect.getPixels() in? (I don't understand what an unsigned char is.)

How would I draw a string on the z axis if drawString only takes x,y coordinates?


You shouldn't try to store what kinect.getPixels() gives you, as that would mean creating a copy, which is probably unnecessary. In any case it returns an ofPixels object, which is essentially a collection of pixel data plus some other info about it.

A nice trick that will avoid calling that function all the time is to call auto & pix = kinect.getDepthPixels(); which will create a reference to the depth pixels, your code is shorter and cleaner and it does not create a copy.
An unsigned char is just a kind of integer number which stores values between 0 and 255.
Take a look here for a more thorough explanation about images and pixels.
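Both points can be demonstrated in a few lines of plain C++ (the helper names are made up for illustration):

```cpp
#include <cassert>

// 1. An unsigned char is an 8-bit integer: it holds 0..255 and wraps
//    around on overflow, which is why it is the natural type for a
//    grayscale pixel.
unsigned char incrementPixel(unsigned char value) {
    return value + 1;                 // 255 + 1 wraps back to 0
}

// 2. Taking a reference (as in `auto & pix = kinect.getDepthPixels();`)
//    aliases the original data instead of copying it, so writes through
//    the reference are visible in the original byte.
void brighten(unsigned char& pixel) {
    if (pixel < 255) pixel += 1;      // modifies the caller's pixel in place
}
```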
If you want to draw a string on z you can use

ofPushMatrix();
ofTranslate(x, y, z);
font.drawString("some text", 0, 0);
ofPopMatrix();

There are several other ways but this is probably the easiest one.

Thank you for the tips Roy, especially your essay on Pixels.

I am almost there with the effect I want. How can I slow it down so that it's more legible? I've slowed the frame rate down, but I was told there are better ways. I want each iteration of the for loop to remain on screen for at least one second.


Lowering the framerate is a really bad idea; it will look choppy and there might be issues with the kinect updates.
Can you post the code that you have right now, so I can point out how to make it slower?
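The usual alternative (a sketch of the idea, not code from this thread) is to gate the update with a timestamp instead of the framerate: keep drawing every frame, but only pick new random words once enough time has passed. In oF you would poll with ofGetElapsedTimef(); here the current time is passed in so the logic stands alone:

```cpp
#include <cassert>

// Timestamp gating: shouldRefresh() returns true at most once per
// `interval` seconds, no matter how often it is polled per frame.
// RefreshTimer is a hypothetical helper, not part of openFrameworks.
class RefreshTimer {
public:
    explicit RefreshTimer(float interval) : interval_(interval) {}

    bool shouldRefresh(float nowSeconds) {
        if (nowSeconds - last_ >= interval_) {
            last_ = nowSeconds;   // remember when we last fired
            return true;
        }
        return false;
    }

private:
    float interval_;
    float last_ = -1e9f;          // so the very first poll fires immediately
};
```

Each frame you would call `timer.shouldRefresh(ofGetElapsedTimef())` and only re-randomize the words when it returns true, leaving the framerate and the kinect updates untouched.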

void ofApp::setup() {

    scaleFont = 30;
    myFont.load("Courier New Bold", scaleFont);
    PERFECT.push_back("Perfect  ");
    PERFECT.push_back("DontMove ");
    PERFECT.push_back("JackPot  ");
    PERFECT.push_back("PinIt    ");
    TOOCLOSE.push_back("Too Close");
    TOOCLOSE.push_back("Get Back");
    TOOCLOSE.push_back("Invading my Space");
    TOOFAR.push_back("COME NEAR");

    label2 = TOOCLOSE[(int)ofRandom(0, TOOCLOSE.size())];
    // note: to be visible in draw(), these widths would need to be members
    float widthPerfect = myFont.stringWidth(PERFECT[0]);
    float widthClose = myFont.stringWidth(TOOCLOSE[0]);
    float widthFar = myFont.stringWidth(TOOFAR[0]);
    stepY = 5;
    stepX = 20;

    if(kinect.isConnected()) {
        ofLogNotice() << "sensor-emitter dist: " << kinect.getSensorEmitterDistance() << "cm";
        ofLogNotice() << "sensor-camera dist:  " << kinect.getSensorCameraDistance() << "cm";
        ofLogNotice() << "zero plane pixel size: " << kinect.getZeroPlanePixelSize() << "mm";
        ofLogNotice() << "zero plane dist: " << kinect.getZeroPlaneDistance() << "mm";
    }

    colorImg.allocate(kinect.width, kinect.height);
    grayImage.allocate(kinect.width, kinect.height);
    grayThreshNear.allocate(kinect.width, kinect.height);
    grayThreshFar.allocate(kinect.width, kinect.height);
    nearThreshold = 255;
    farThreshold = 110;
    bThreshWithOpenCV = true;
    // tilt on startup
    angle = 15;
    // start from the front
    bDrawPointCloud = false;
}

void ofApp::draw() {
    float ratioX = ofGetWidth()/grayImage.getWidth();
    float ratioY = ofGetHeight()/grayImage.getHeight();
    for (int y=0; y < grayImage.getHeight(); y+=stepY) {
        for (int x=0; x < grayImage.getWidth(); x+=stepX) {
            float depth = kinect.getDistanceAt(x, y);
            float s = ofMap(depth, 500, 3000, 0, 3); // intended font scale, not used yet
            if (grayImage.getPixels().getColor(x,y) == 255) {
                // The Sweet Spot
                if (depth>=800 && depth<=1200) {
                    ofPushMatrix();
                    ofTranslate((ratioX*x) + widthPerfect, ratioY*y);
                    myFont.drawString(PERFECT[(int)ofRandom(0, PERFECT.size())], 0, 0);
                    ofPopMatrix();
                // Too Close
                } else if (depth<800) {
                    myFont.drawString(TOOCLOSE[(int)ofRandom(0, TOOCLOSE.size())], 0, 0);
                // Too Far
                } else if (depth>1200) {
                    myFont.drawString(TOOFAR[(int)ofRandom(0, TOOFAR.size())], 0, 0);
                }
            }
        }
    }

    // Masking around the books
    ofDrawRectangle(0, 0, ofGetWidth(), 10);
    ofDrawRectangle(0, 0, 255, ofGetHeight());
    ofDrawRectangle((ofGetWidth()-160), 0, ofGetWidth(), ofGetHeight());
//    ofDrawBitmapString(reportStream.str(), 20, 652);
}