Multiple Kinect v2 Overlap Calibration

Has anyone had experience working with multiple Kinect v2s? I mean three Kinects with three computers, each of them sending blob & contour info over the network. There will be some overlap between the sensors. Is there any practical way to calibrate them for continuous tracking while people walk across the space?

Any ideas or suggestions would be great.
cheers

This addon works well for doing blob tracking in 3D: ofxKinectBlobFinder.

Perhaps you can run it on each machine and compare the centroids of each blob. If the centroids are close enough you can assume they belong to the same blob.
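
A minimal sketch of that centroid check (the struct, function, and threshold names here are my own assumptions, not part of any addon):

// hedged sketch: decide whether blobs reported by two machines are the same
struct NetworkBlob {
    ofVec3f centroid;   // blob centroid in a shared world space
    int sourceKinect;   // which machine reported it
};

// blobs whose centroids sit within mergeDistance of each other are
// treated as two views of the same person
bool isSameBlob(const NetworkBlob& a, const NetworkBlob& b, float mergeDistance) {
    return a.centroid.distance(b.centroid) < mergeDistance;
}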

I am dealing with Kinect overlap right now as well, except with two v1s on the same computer, and they are not looking down on the scene but rather looking at it.

I have created a merged Kinect class that combines them. Here is how I draw the point cloud:

// assumed members elsewhere in the class:
// ofMesh mesh; ofxKinect *kinectOnePtr, *kinectTwoPtr;
int step = 2;
float distanceZ; // getDistanceAt() returns a float, so don't truncate it to int
ofVec3f tempPointCurrent;

// first Kinect: add its points to the mesh unchanged
for(int y = 0; y < kinectOnePtr->height; y += step) {
    for(int x = 0; x < kinectOnePtr->width; x += step) {
        distanceZ = kinectOnePtr->getDistanceAt(x, y);
        if(distanceZ > 0) { // zero means no depth reading at this pixel
            tempPointCurrent = kinectOnePtr->getWorldCoordinateAt(x, y);

            //greyScaleValue = kinectOnePtr->getDepthPixels()[y * kinectOnePtr->width + x];
            //mesh.addColor(ofColor(greyScaleValue, greyScaleValue, greyScaleValue));
            mesh.addColor(ofColor::red);
            mesh.addVertex(tempPointCurrent);
        }
    }
}

// second Kinect: same loop, but shift each point in x before adding it
for(int y = 0; y < kinectTwoPtr->height; y += step) {
    for(int x = 0; x < kinectTwoPtr->width; x += step) {
        distanceZ = kinectTwoPtr->getDistanceAt(x, y);
        if(distanceZ > 0) {
            tempPointCurrent = kinectTwoPtr->getWorldCoordinateAt(x, y);

            // the offset scales with depth so the clouds line up at every distance
            tempPointCurrent.x += ofMap(distanceZ, mapFromLowZ, mapFromHighZ, mapToLowZ, mapToHighZ) * transformConstant + kinectDistance;

            mesh.addColor(ofColor::blue);
            mesh.addVertex(tempPointCurrent);
        }
    }
}

The important line for determining how the Kinects overlap is this one:

tempPointCurrent.x += ofMap(distanceZ, mapFromLowZ, mapFromHighZ, mapToLowZ, mapToHighZ) * transformConstant + kinectDistance;

This shifts the second Kinect's points along x depending on how far away they are in z. I then have a slider GUI for adjusting the constants so that everything lines up.
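
For reference, a minimal sketch of that kind of slider setup, assuming ofxGui (only the parameter names come from the code above; the ranges are placeholders to tune by eye):

// in ofApp.h (requires ofxGui)
ofxPanel gui;
ofParameter<float> mapFromLowZ, mapFromHighZ, mapToLowZ, mapToHighZ;
ofParameter<float> transformConstant, kinectDistance;

// in ofApp::setup() -- adjust the sliders until the clouds line up
gui.setup("kinect alignment");
gui.add(mapFromLowZ.set("mapFromLowZ", 500, 0, 4000));
gui.add(mapFromHighZ.set("mapFromHighZ", 4000, 0, 8000));
gui.add(mapToLowZ.set("mapToLowZ", 0, -500, 500));
gui.add(mapToHighZ.set("mapToHighZ", 100, -500, 500));
gui.add(transformConstant.set("transformConstant", 1.0f, -5.0f, 5.0f));
gui.add(kinectDistance.set("kinectDistance", 600, -2000, 2000));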

Once you have them in a single merged point cloud drawn into an FBO, you can convert that FBO into an OpenCV greyscale image and perform your standard blob tracking on it.
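
A minimal sketch of that FBO-to-blob-tracking step, assuming ofxOpenCv and an FBO allocated as GL_RGB (the thresholds and image sizes are placeholders):

// ofApp.h: ofFbo cloudFbo; ofxCvColorImage colorImg; ofxCvGrayscaleImage grayImg;
//          ofxCvContourFinder contourFinder;
// setup(): colorImg.allocate(1024, 768); grayImg.allocate(1024, 768);

ofPixels pixels;
cloudFbo.readToPixels(pixels);    // pull the rendered cloud off the GPU
colorImg.setFromPixels(pixels);   // into OpenCV land
grayImg = colorImg;               // RGB -> greyscale
grayImg.threshold(20);            // anything drawn becomes a white blob
contourFinder.findContours(grayImg, 100, 640 * 480, 10, false);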

None of this really needs ofxKinectBlobFinder, but it would be nice to get it working with that to get the benefits of 3D rather than 2D blob tracking - I’m working on that now! I have code that gets the 2D silhouettes from the 3D blob, but I haven’t gotten it working with the merged point cloud yet.
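
Purely as a sketch of one way that silhouette step could work (this is not the actual code mentioned above; blobMesh and cam are hypothetical names):

// splat each 3D blob vertex into a binary image to get its 2D silhouette
ofPixels silhouette;
silhouette.allocate(ofGetWidth(), ofGetHeight(), OF_PIXELS_GRAY);
silhouette.setColor(ofColor::black);
for (auto& v : blobMesh.getVertices()) {   // blobMesh: one blob's points
    glm::vec3 s = cam.worldToScreen(v);    // project with the scene camera
    int x = (int)s.x, y = (int)s.y;
    if (x >= 0 && x < (int)silhouette.getWidth() && y >= 0 && y < (int)silhouette.getHeight()) {
        silhouette.setColor(x, y, ofColor::white);
    }
}
// a dilate/blur pass over 'silhouette' would close gaps between the points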

You can calibrate the cameras with OpenCV as if they were stereo cameras.
I did it some time ago and it worked well, but I can't find the code for it right now. There are several addons and projects for calibrating a Kinect with another camera or a projector, and this is not very different.
If I find the code I used I’ll upload it.
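
In the meantime, here is a rough sketch of the OpenCV side (plain OpenCV with the 3.x-style argument order, not a particular addon; the corner lists are assumed to come from checkerboard detection in paired views):

// #include <opencv2/calib3d.hpp>
// inputs gathered beforehand with cv::findChessboardCorners on paired views
std::vector<std::vector<cv::Point3f>> objectPoints;  // board corners, board space
std::vector<std::vector<cv::Point2f>> imagePoints1, imagePoints2;
cv::Mat K1, D1, K2, D2;  // per-camera intrinsics, pre-filled via cv::calibrateCamera
cv::Mat R, T, E, F;
cv::Size imageSize(640, 480);

double rms = cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                                 K1, D1, K2, D2, imageSize,
                                 R, T, E, F, cv::CALIB_FIX_INTRINSIC);
// R|T is the rigid transform between the two cameras -- apply it to one
// Kinect's point cloud to bring it into the other's coordinate space
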
Btw, you don't need 3 computers, at least not for the processing power; I don't know if multiple Kinect v2s are supported on one machine.
best

Hmm, the trouble is that multiple Kinect v2s can't work on the same machine due to the USB bandwidth issue. So I'm thinking of maybe using Kinect v1s, which should work on the same computer (at least 3, I think?). I bought an extra USB PCIe card in case I need to work with more than 2 Kinects.

For the calibration part, do I need to calibrate each Kinect's intrinsic parameters, or, like Caroline said, just stitch the point clouds together? I don't need skeleton tracking, only 2D silhouettes.
I'll check out ofxKinectBlobFinder, thanks for the help.

alex

Hi,
I haven't used the v2, so you might be right about that. With v1s it has worked for me using 3 Kinects, but make sure you put each one on a different USB bus.

As for calibration: the idea with OpenCV is to get the extrinsic parameters, but it is not easy to get right.
On the other hand, I found some code to stitch two Kinects just by moving the point clouds by hand: Archive.zip (11.4 KB), virtualKinectMultiTest.zip (89.9 KB)
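
The by-hand version can be as simple as wrapping the second cloud's draw call in a manually tuned transform (a sketch; the offset and rotation variables are mine, nudged from keys or a GUI):

ofVec3f manualOffset;   // nudged until the clouds overlap
float manualYaw = 0;    // degrees, rotation of Kinect 2 around the up axis

ofPushMatrix();
ofTranslate(manualOffset);
ofRotateYDeg(manualYaw);   // ofRotateY() on pre-0.10 openFrameworks
meshKinectTwo.draw();      // the second Kinect's point cloud
ofPopMatrix();
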
best

Hi,
I also need to do something like that, to track a dancer with 3 Kinects.

How do I merge the data from the 3 cameras to create one model of the dancer?


Hi @Caroline_Record, did you ever get ofxKinectBlobFinder working with the merged point cloud? I'm working on something similar, trying to do 3D blob tracking with multiple Kinect v2s. I forked that addon and got it working with ofxKinectForWindows2, and also made it work by passing in an ofMesh instead of being initialized with an ofxKinect instance, since I thought that would be helpful with merged point clouds. It turns out it still needs a depth image for doing distance checks on neighboring pixels, but at least you can use the addon with any kind of depth camera now :smile: So I've still got some more work to do, but I'm curious how this worked out for you!

I wish! I ended up using 2D, which worked fine for our purposes. It seemed very possible, but the 3D blob tracking used a lot of the built-in Kinect methods, which would have needed to be redone for the merged point cloud.