How to make a slit scan sketch with Kinect IR?

Hi folks, first post
I completed Dan Buzzo’s tutorial for making a Kinect 3D photo booth, which gave me the idea to make a slit-scan sketch using the volumetric data from the infrared camera. I’m fairly new to coding and I’ve given it a go, but I haven’t been able to realise it.

My most basic idea is to draw just the middle column of vertex points - a slit - but keep updating and drawing that same middle slit side by side along the x axis. I am able to read that slit and draw it, but I’m stuck on how to keep drawing copies of it horizontally, side by side. Here’s an example snippet of my code and a screenshot. Thanks in advance

void ofApp::drawPointCloud() {
    ofMesh pointCloud;
    pointCloud.setMode(OF_PRIMITIVE_POINTS);

    int x = kinect.width/2; // the middle column - the slit
    for (int y = 0; y < kinect.height; y++) {
        pointCloud.addVertex(ofVec3f(x, y, kinect.getDistanceAt(x, y)));
    }
    pointCloud.drawVertices();
}

Hi, you need to store the points from each frame. So put that ofMesh in ofApp.h, so it exists for the whole duration of the app running. Then, before adding the points from the Kinect, iterate through all the points already in pointCloud and move them sideways (change their x coordinate by some amount; 1 pixel will probably work). Then add the new vertices as you do in your code.
You might also want to delete points once they have moved too far away.
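To make that concrete, here is a minimal, framework-free sketch of the accumulate-and-shift idea in plain C++ (no openFrameworks, so the names `Point3`, `advanceSlitScan`, and `maxDrift` are made up for illustration). In the real app the same three steps would act on an `ofMesh` member declared in ofApp.h:

```cpp
#include <algorithm>
#include <vector>

// One vertex of the accumulated point cloud (stands in for an ofMesh vertex).
struct Point3 { float x, y, z; };

// Called once per frame: shift every stored point one pixel to the right,
// drop points that have drifted further than maxDrift, then append the
// fresh slit at x = 0. slitDepths holds one depth value per image row.
void advanceSlitScan(std::vector<Point3>& cloud,
                     const std::vector<float>& slitDepths,
                     float maxDrift) {
    for (auto& p : cloud) p.x += 1.0f; // move the older slits sideways
    cloud.erase(std::remove_if(cloud.begin(), cloud.end(),
                               [&](const Point3& p) { return p.x > maxDrift; }),
                cloud.end());          // forget slits that scrolled too far
    for (std::size_t y = 0; y < slitDepths.size(); ++y)
        cloud.push_back({0.0f, static_cast<float>(y), slitDepths[y]}); // new slit
}
```

With openFrameworks you would do the equivalent on the persistent mesh: walk `mesh.getVertices()` to shift the x coordinates, remove or rebuild the vertices that scrolled off, and `addVertex()` the new slit each frame.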


Thanks a lot for this!

here is a version I wrote a while ago building on the kinect demo examples - it may need some tweaking :slight_smile:

the principle is exactly as @roymacdonald says

grab the depth points you want from the middle of the depth data and then move them across a few pixels with each iteration…

if I get time I will dig out a Kinect and check it still works ok :slight_smile:

// Project Title: slitScan3D
// Description: experiment in slitscan camera development with 3D sensor

#pragma once

#include "ofMain.h"
#include "ofxOpenCv.h"
#include "ofxKinect.h"

// Windows users:
// You MUST install the libfreenect kinect drivers in order to be able to use
// ofxKinect. Plug in the kinect and point your Windows Device Manager to the
// driver folder in:
//     ofxKinect/libs/libfreenect/platform/windows/inf
// This should install the Kinect camera, motor, & audio drivers.
// You CANNOT use this driver and the OpenNI driver with the same device. You
// will have to manually update the kinect device to use the libfreenect drivers
// and/or uninstall/reinstall it in Device Manager.
// No way around the Windows driver dance, sorry.

// uncomment this to read from two kinects simultaneously

// Dan Buzzo 2018 -
// for UWE Bristol, Creative Technology MSc, Creative Technology Toolkit module 2018-19
// with modified portions from oF kinect example;

class ofApp : public ofBaseApp {
public:
    void setup();
    void update();
    void draw();
    void exit();
    void keyPressed(int key);
    void mouseDragged(int x, int y, int button);
    void mousePressed(int x, int y, int button);
    void mouseReleased(int x, int y, int button);
    void mouseEntered(int x, int y);
    void mouseExited(int x, int y);
    void windowResized(int w, int h);
    // this is the definition of a custom function that we will use to turn kinect data into a 3d point cloud
    void drawPointCloud();
    // here we define an instance of the kinect object to talk to our kinect sensor
    ofxKinect kinect;
    // define some image objects to store our image data as we work
    ofxCvColorImage colorImg;
    ofxCvGrayscaleImage grayImage; // grayscale depth image
    ofxCvGrayscaleImage grayThreshNear; // the near thresholded image
    ofxCvGrayscaleImage grayThreshFar; // the far thresholded image    
    ofxCvContourFinder contourFinder;
    // define boolean options for how the data will be rendered on screen - it is good practice to begin boolean variables with a b_ prefix to make your code more readable
    bool b_ThreshWithOpenCV;
    bool b_DrawPointCloud;
    // define variables for filtering the data from the kinect
    int nearThreshold;
    int farThreshold;
    int angle;
    // define a new virtual camera to be used for viewing our 3d data onscreen
    ofEasyCam easyCam;
    // material from slitScan example
    ofPixels videoDepthScan;
    ofPixels videoRGBScan;
    ofTexture videoDepthTexture;
    ofTexture videoRGBTexture;
    int camWidth;
    int camHeight;
    int y;
};

and the main source file, ofApp.cpp

// Project Title: slitScan3D
// Description: experiment in slitscan camera development with 3D sensor

#include "ofApp.h"

// If you are struggling to get the device to connect (especially Windows users)
// please look at the ReadMe in addons/ofxKinect/

// Dan Buzzo 2019 -
// with modified portions from oF kinect example; 

void ofApp::setup() {
    kinect.setRegistration(true); // enable depth->video image calibration
    kinect.init();
    // other options we could use
    //kinect.init(true); // shows infrared instead of RGB video image
    //kinect.init(false, false); // disable video image (faster fps)
    kinect.open(); // opens first available kinect
    // other options we could use
    //kinect.open(1); // open a kinect by id, starting with 0 (sorted by serial # lexicographically)
    //kinect.open("A00362A08602047A"); // open a kinect using its unique serial #
    // send some setup info out to the console for debugging
    // print the intrinsic IR sensor values
    if(kinect.isConnected()) {
        ofLogNotice() << "sensor-emitter dist: " << kinect.getSensorEmitterDistance() << "cm";
        ofLogNotice() << "sensor-camera dist:  " << kinect.getSensorCameraDistance() << "cm";
        ofLogNotice() << "zero plane pixel size: " << kinect.getZeroPlanePixelSize() << "mm";
        ofLogNotice() << "zero plane dist: " << kinect.getZeroPlaneDistance() << "mm";
    }
    // set up the size of our image buffers we are going to use -
    // we make them all the same width and height as the data we will get from the kinect
    colorImg.allocate(kinect.width, kinect.height);
    grayImage.allocate(kinect.width, kinect.height);
    grayThreshNear.allocate(kinect.width, kinect.height);
    grayThreshFar.allocate(kinect.width, kinect.height);
    // set some values for filtering the data from the kinect
    nearThreshold = 230;
    farThreshold = 70;
    b_ThreshWithOpenCV = true;
    // zero the tilt on startup
    angle = 0;
    // start from the front
    b_DrawPointCloud = false;
    // slitscan setup
    camWidth =  kinect.getWidth();  // try to grab at this size from the camera.
    camHeight = kinect.getHeight();
    y = 0;
    videoDepthScan.allocate(camWidth,camHeight, OF_PIXELS_RGB);
    videoRGBScan.allocate(camWidth,camHeight, OF_PIXELS_RGB);
}

void ofApp::update() {
    ofBackground(100, 100, 100);
    // get fresh data from the kinect
    kinect.update();
    // if there is a new frame from the kinect and we are connected
    if(kinect.isFrameNew()) {
        // load grayscale depth image from the kinect source
        grayImage.setFromPixels(kinect.getDepthPixels());
        // we do two thresholds - one for the far plane and one for the near plane
        // we then do a cvAnd to get the pixels which are a union of the two thresholds
        if(b_ThreshWithOpenCV) {
            grayThreshNear = grayImage;
            grayThreshFar = grayImage;
            grayThreshNear.threshold(nearThreshold, true);
            grayThreshFar.threshold(farThreshold);
            cvAnd(grayThreshNear.getCvImage(), grayThreshFar.getCvImage(), grayImage.getCvImage(), NULL);
        } else {
            // or we do it ourselves - show people how they can work with the pixels
            ofPixels & pix = grayImage.getPixels();
            int numPixels = pix.size();
            for(int i = 0; i < numPixels; i++) {
                if(pix[i] < nearThreshold && pix[i] > farThreshold) {
                    pix[i] = 255;
                } else {
                    pix[i] = 0;
                }
            }
        }
        // update the cv images
        grayImage.flagImageChanged();
        // find contours which are between the size of 10 pixels and 1/2 the w*h pixels,
        // considering up to 20 blobs. find holes is set to false, so interior contours are skipped
        contourFinder.findContours(grayImage, 10, (kinect.width*kinect.height)/2, 20, false);
    }
    // slitscan work
    //ofPixels & pixels = vidGrabber.getPixels();
    ofPixels & pixels = kinect.getDepthPixels();
    ofPixels & rgbpixels = kinect.getPixels();
    for (int x=0; x<camWidth; x++ ) { // loop through all the pixels on a line
        ofColor color = pixels.getColor( x,  y); // get the pixels on line y
        videoDepthScan.setColor(x, y, color);
        color = rgbpixels.getColor( x,  y); // get the pixels on line y
        videoRGBScan.setColor(x, y, color);
    }
    // upload the scan buffers into textures so draw() can display them
    videoDepthTexture.loadData(videoDepthScan);
    videoRGBTexture.loadData(videoRGBScan);
    if (y>=camHeight-1) {
        y=0; // if we are on the bottom line of the image then start at the top again
    } else {
        y+=1; // otherwise step on to the next line.
        // y+=2; // uncomment this instead to step on two lines at a time.
    }
}

void ofApp::draw() {
    ofSetColor(255, 255, 255);
    if(b_DrawPointCloud) { // draw the data from the kinect as a 3d point cloud
        easyCam.begin();
        drawPointCloud();
        easyCam.end();
    } else { // draw from raw data from the live kinect
        kinect.drawDepth(10, 10, 400, 300);
        kinect.draw(420, 10, 400, 300);
        grayImage.draw(10, 320, 400, 300);
        //contourFinder.draw(10, 320, 400, 300);
        videoDepthTexture.draw( 10, 320, camWidth, camHeight);
        videoRGBTexture.draw( 400, 320, camWidth, camHeight);
    }
    // draw instructions
    ofSetColor(255, 255, 255);
    stringstream reportStream; // make a stringstream object that we can put text
    // here we assemble a string of text giving us a readout of what the kinect data is doing
    // and put it into the stringstream object we just made
    if(kinect.hasAccelControl()) {
        reportStream << "accel is: " << ofToString(kinect.getMksAccel().x, 2) << " / "
        << ofToString(kinect.getMksAccel().y, 2) << " / "
        << ofToString(kinect.getMksAccel().z, 2) << endl;
    } else {
        reportStream << "Note: this is a newer Xbox Kinect or Kinect For Windows device," << endl
        << "motor / led / accel controls are not currently supported" << endl << endl;
    }
    reportStream << "press p to switch between images and point cloud, rotate the point cloud with the mouse" << endl
    << "using opencv threshold = " << b_ThreshWithOpenCV <<" (press spacebar)" << endl
    << "set near threshold " << nearThreshold << " (press: + -)" << endl
    << "set far threshold " << farThreshold << " (press: < >) num blobs found " << contourFinder.nBlobs
    << ", fps: " << ofGetFrameRate() << endl
    << "press c to close the connection and o to open it again, connection is: " << kinect.isConnected() << endl;
    if(kinect.hasCamTiltControl()) {
        reportStream << "press UP and DOWN to change the tilt angle: " << angle << " degrees" << endl
        << "press 1-5 & 0 to change the led mode" << endl;
    }
    // here we draw our report stringstream object to the screen
    //ofDrawBitmapString(reportStream.str(), 20, 652);
}

// this is our custom function that we call to make a 3D point cloud from the raw kinect data
void ofApp::drawPointCloud() {
    int w = 640;
    int h = 480;
    // make a new 3d mesh, called 'mesh'
    ofMesh mesh;
    mesh.setMode(OF_PRIMITIVE_POINTS); // render the mesh as individual points
    int step = 2;
    // step through each row of the data from the kinect using a a loop inside a loop
    // this loops through each line (the inner x loop) and after each line steps down to the next line (the outer y loop)
    for(int y = 0; y < h; y += step) {
        for(int x = 0; x < w; x += step) {
            if(kinect.getDistanceAt(x, y) > 0) {
                // we get the kinect data for each pixel and change each point in our mesh to correspond to the x,y,z and colour data
                ofVec3f v;
                v.set(x,y, kinect.getDistanceAt(w/2, y));
                mesh.addVertex(v);
               // mesh.addVertex(x,y, kinect.getDistanceAt(x, y));
                //mesh.addVertex(kinect.getWorldCoordinateAt(x, h/2));
            //if(videoDepthTexture.getDistanceAt(x, y) > 0) {
                // we get the kinect data for each pixel and change each point in our mesh to correspond to the x,y,z and colour data
//            ofColor zGrey = 0;
//            zGrey =  videoRGBScan.getColor(x, y);
//            int z = zGrey.r;
//            ofVec3f v;
//            v.set(x,y,z);
//            mesh.addVertex(v);
//            mesh.addColor(videoDepthScan.getColor(x,y));
            }
        }
    }
    glPointSize(3); // this sets the size of the dots we use when we draw the mesh as a 3D point cloud
    ofPushMatrix();
    // the projected points are 'upside down' and 'backwards'
    ofScale(1, -1, -1);
    ofTranslate(0, 0, -1000); // center the points a bit
    // here we draw our mesh object to the screen
    mesh.drawVertices();
    ofPopMatrix();
}

void ofApp::exit() {
    // when we quit our app we remember to close the connection to the kinect sensor
    kinect.setCameraTiltAngle(0); // zero the tilt on exit
    kinect.close();
}

void ofApp::keyPressed (int key) {
    // this is a switch statement -
    // rather than dozens of if - then statements we can say - get the keypress from the keyboard and do one from this list depending on the key value.
    switch (key) {
        case ' ':
            b_ThreshWithOpenCV = !b_ThreshWithOpenCV;
            break;

        case 'p':
            b_DrawPointCloud = !b_DrawPointCloud;
            break;

        case '>':
        case '.':
            farThreshold ++;
            if (farThreshold > 255) farThreshold = 255;
            break;

        case '<':
        case ',':
            farThreshold --;
            if (farThreshold < 0) farThreshold = 0;
            break;

        case '+':
        case '=':
            nearThreshold ++;
            if (nearThreshold > 255) nearThreshold = 255;
            break;

        case '-':
            nearThreshold --;
            if (nearThreshold < 0) nearThreshold = 0;
            break;

        case 'w':
        case 'o':
            kinect.open();
            kinect.setCameraTiltAngle(angle); // go back to prev tilt
            break;

        case 'c':
            kinect.setCameraTiltAngle(0); // zero the tilt
            kinect.close();
            break;

        // led modes, as in the stock ofxKinect example
        case '1': kinect.setLed(ofxKinect::LED_GREEN); break;
        case '2': kinect.setLed(ofxKinect::LED_YELLOW); break;
        case '3': kinect.setLed(ofxKinect::LED_RED); break;
        case '4': kinect.setLed(ofxKinect::LED_BLINK_GREEN); break;
        case '5': kinect.setLed(ofxKinect::LED_BLINK_YELLOW_RED); break;
        case '0': kinect.setLed(ofxKinect::LED_OFF); break;

        case OF_KEY_UP:
            angle++;
            if(angle>30) angle=30;
            kinect.setCameraTiltAngle(angle);
            break;

        case OF_KEY_DOWN:
            angle--;
            if(angle<-30) angle=-30;
            kinect.setCameraTiltAngle(angle);
            break;

        case 'f':
            ofToggleFullscreen();
            break;
    }
}

void ofApp::mouseDragged(int x, int y, int button){
}

void ofApp::mousePressed(int x, int y, int button){
}

void ofApp::mouseReleased(int x, int y, int button){
}

void ofApp::mouseEntered(int x, int y){
}

void ofApp::mouseExited(int x, int y){
}

void ofApp::windowResized(int w, int h){
}

This still works, Dan - thanks for your help :slight_smile:
