ofxKinect, OpenCV contour detection and ofxBox2d interaction problem

I am working on a project where I want to use the Kinect and OpenCV contour detection to interact with Box2d objects.
I found something similar (http://forum.openframeworks.cc/t/again-of-±box2d—diamonds-alchemy/4503/0) on the OF forum. Can somebody help me adapt it to the Kinect?

The process is pretty much the same as using a webcam. All you need to do is run the contour finder on the Kinect's depth image instead of a webcam feed. Have a look at ofxKinect and the OpenCV example to see how they work.

I have got the Kinect example working with OpenCV. I am finding contours using the findContours method. Now I want to convert the OpenCV blobs into ofxBox2d objects (triangles, polygons etc.). Can you suggest some ways to do it? Any tutorial or example would be a great help. Thanks.

check this out


Thanks, but I am still stuck on the blob-to-Box2d conversion. Can somebody suggest some ways to get the blobs into Box2d objects?

Here is how I did it recently:

vector<ofPoint> pts = your_ofxCvBlob.pts;
ofxBox2dPolygon poly;
for (int i = 0; i < pts.size(); i++) {
	poly.addVertex(pts[i]);
}
poly.setPhysics(1.0, 0.3, 0.3);
poly.create(box2d.getWorld());
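One thing worth noting: a raw ofxCvBlob contour can contain hundreds of points, and Box2D slows down badly with that many vertices per body. A simple mitigation is to decimate the contour before building the polygon, e.g. keep only every Nth point. A minimal sketch in plain C++ (the `Point` struct and `decimate` function are stand-ins invented for the example, not OF API):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Point { float x, y; };  // stand-in for ofPoint

// Keep only every `step`-th point of a contour so the resulting
// Box2D polygon has far fewer vertices.
std::vector<Point> decimate(const std::vector<Point>& pts, size_t step) {
    std::vector<Point> out;
    for (size_t i = 0; i < pts.size(); i += step) {
        out.push_back(pts[i]);
    }
    return out;
}
```

With the reduced point list you would then build the Box2D polygon vertex by vertex (poly.addVertex()) before calling setPhysics() and create(). If I remember correctly, recent versions of ofxBox2dPolygon also provide a simplify() method that serves a similar purpose.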

Thanks, with the above code I converted the blobs into Box2d objects. Now the problem is that the program runs very slowly, and I don't know why, so it is very hard to get the collision between two objects. Please correct me if I am wrong somewhere.

#include "testApp.h"

void testApp::setup() {
	// enable depth->video image calibration
	kinect.init();
	kinect.open();
	//kinect.init(true); // shows infrared instead of RGB video image
	//kinect.init(false, false); // disable video image (faster fps)

	colorImg.allocate(kinect.width, kinect.height);
	grayImage.allocate(kinect.width, kinect.height);
	grayThreshNear.allocate(kinect.width, kinect.height);
	grayThreshFar.allocate(kinect.width, kinect.height);

	nearThreshold = 230;
	farThreshold = 150;
	bThreshWithOpenCV = true;

	// tilt the kinect on startup
	angle = 15;
	kinect.setCameraTiltAngle(angle);

	// Box2d
	box2d.init();
	box2d.setGravity(0, 20);
	box2d.setFPS(30);
}

void testApp::update() {
	ofBackground(100, 100, 100);
	kinect.update();

	// there is a new frame and we are connected
	if(kinect.isFrameNew()) {

		// load grayscale depth image from the kinect source
		grayImage.setFromPixels(kinect.getDepthPixels(), kinect.width, kinect.height);

		// we do two thresholds - one for the far plane and one for the near plane
		// we then do a cvAnd to get the pixels which are a union of the two thresholds
		if(bThreshWithOpenCV) {
			grayThreshNear = grayImage;
			grayThreshFar = grayImage;
			grayThreshNear.threshold(nearThreshold, true);
			grayThreshFar.threshold(farThreshold);
			cvAnd(grayThreshNear.getCvImage(), grayThreshFar.getCvImage(), grayImage.getCvImage(), NULL);
		} else {
			// or we do it ourselves - show people how they can work with the pixels
			unsigned char * pix = grayImage.getPixels();
			int numPixels = grayImage.getWidth() * grayImage.getHeight();
			for(int i = 0; i < numPixels; i++) {
				if(pix[i] < nearThreshold && pix[i] > farThreshold) {
					pix[i] = 255;
				} else {
					pix[i] = 0;
				}
			}
		}

		// update the cv images
		grayImage.flagImageChanged();

		// find contours which are between 10 pixels and 1/2 the w*h pixels in size,
		// considering at most 20 blobs, without interior holes
		contourFinder.findContours(grayImage, 10, (kinect.width*kinect.height)/2, 20, false);

		// turn each blob into a box2d polygon
		for(int i = contourFinder.blobs.size()-1; i >= 0; i--) {
			vector<ofPoint> pts = contourFinder.blobs[i].pts;
			ofxBox2dPolygon poly;
			for(int j = 0; j < pts.size(); j++) {
				poly.addVertex(pts[j]);
			}
			poly.setPhysics(1.0, 0.3, 0.3);
			poly.create(box2d.getWorld());
			triangles.push_back(poly);
		}
	}

	// occasionally drop a new circle in from the top
	if((int)ofRandom(0, 10) == 0) {
		ofxBox2dCircle c;
		c.setPhysics(0.3, 0.5, 0.1);
		c.setup(box2d.getWorld(), (ofGetWidth()/2)+ofRandom(-20, 20), -20, ofRandom(10, 20));
		circles.push_back(c);
	}

	box2d.update();
}

void testApp::draw() {
	ofSetColor(255, 255, 255);
	// draw from the live kinect
	//kinect.drawDepth(0, 0, 1024, 768);
	//kinect.draw(0, 0, 1024, 768);

	// the contour polygons
	for (int i=0; i<triangles.size(); i++) {
		triangles[i].draw();
	}

	// some circles :)
	for (int i=0; i<circles.size(); i++) {
		circles[i].draw();
	}
}

Since your blobs are unique per frame, you will probably need to clear the shapes every time.

try putting this at the top of update()


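The snippet itself seems to be missing from the post above, but based on the advice it would amount to destroying and clearing last frame's shapes before rebuilding them from the new blobs. Here is the idea demonstrated with a stand-in `Shape` type (in openFrameworks you would call destroy() on each ofxBox2d shape in your `triangles` vector and then clear the vector):

```cpp
#include <cassert>
#include <vector>

// Stand-in for an ofxBox2d shape; in OF, destroy() removes the
// body from the Box2D world.
struct Shape {
    bool alive = true;
    void destroy() { alive = false; }
};

// Clear last frame's shapes before rebuilding them from the new blobs.
// Without this, bodies accumulate every frame and the simulation
// grinds to a halt -- the likely cause of the slowness reported above.
void clearShapes(std::vector<Shape>& shapes) {
    for (auto& s : shapes) s.destroy();  // remove bodies from the world
    shapes.clear();                      // drop the wrappers
}
```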
Thank you. The speed is still really slow, and I am not getting the triangles on the body, they end up outside of the shape. I can't figure out what is wrong with the code; maybe I am missing something.

Can anybody please share the code? I am stuck on getting triangles around the contours. Please help.

Hello suhit,

Did you figure out what was the issue with the speed?

I have the same problem now :-/


I know this is an old thread, but since I derived my code from the code provided by @jvcleave, perhaps someone knows what I am doing wrong.
I am getting an error at the line

vector<ofPoint> pts = contourFinder.blobs[i].pts;

error C2440 cannot convert from ‘_Ty’ to vector allocator

I'm not sure what _Ty is, but the blob points vector should hold ofVec3f.
The block of code is exactly what suhit has above. Thanks for any help.

the full error is
Error C2440 ‘initializing’: cannot convert from ‘_Ty’ to ‘std::vector<ofVec3f,std::allocator<_Ty>>’

Apparently some things have changed since I last used OF.

with the new syntax, that becomes:
auto pts = contourFinder.blobs.at(i).pts;
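For anyone curious why `auto` fixes it: the declared element type no longer matched what `pts` actually holds, and `auto` always deduces the member's real type. A minimal illustration with plain std types standing in for the OF ones (`Vec3`, `Blob`, and `countPoints` are invented for the example):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };  // stand-in for ofVec3f

struct Blob {
    std::vector<Vec3> pts;  // like ofxCvBlob::pts
};

// Copying the member with `auto` always deduces the correct element
// type, so the code keeps compiling even if the library changes the
// underlying point type (as happened between OF versions).
std::size_t countPoints(const Blob& blob) {
    auto pts = blob.pts;  // deduced as std::vector<Vec3>
    return pts.size();
}
```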