I have a setup with two kinects looking down from the ceiling onto the floor.
The kinects are about 4 meters apart in a room with a 5 meter ceiling.
I want to track a person across the whole space.
I use OpenCV to blob track in the thresholded depth image, then transfer the track points into the 3D world of the two point clouds.
I offset each point cloud so that they overlap around the middle of the room.
By offsetting the point clouds I automatically also offset the track points.
Tracking works fine.
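To illustrate what I mean by offsetting the track points along with the clouds, here is a rough sketch (the offset values and the ofxKinect lookup call are placeholders for however you map a blob centroid into 3D; check the API of your ofxKinect version):

```cpp
#include "ofMain.h"
#include "ofxKinect.h"
#include <opencv2/core.hpp>

// Placeholder offsets: kinect A defines the origin, kinect B sits roughly 4 m away.
ofVec3f kinectAOffset(0, 0, 0);
ofVec3f kinectBOffset(4.0f, 0, 0);

// Take a blob centroid found in one kinect's thresholded depth image,
// look it up in 3D, and apply that kinect's offset so the track point
// lands in the same shared space as the offset point cloud.
ofVec3f trackPointToWorld(ofxKinect& kinect, const cv::Point2f& blobCentroid,
                          const ofVec3f& kinectOffset){
    ofVec3f p = kinect.getWorldCoordinateAt((int)blobCentroid.x, (int)blobCentroid.y);
    return p + kinectOffset;   // same offset as the point cloud, so both stay aligned
}
```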
But when a person is in the middle of the room, I sometimes get blobs in both depth images and, as a result, in both point clouds. The overlap fixes that to a certain degree, but it is not perfect.
Are there other ways of creating a continuous tracking area/space with multiple kinects?
I’m working on the same thing now, and am just stitching the two images together in an FBO before outputting them as a single cv::Mat.
I’m doing a few image-manipulation things too, like adding a gradient to the image to compensate for the “tilt”, dewarping, and skewing/ROI before stitching. Yeah, you definitely just want to hand one image to OpenCV though.
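Roughly, the stitching looks something like this (a simplified, untested sketch; kinectA/kinectB are placeholder ofxKinect instances, and the depth-texture accessor may differ depending on your ofxKinect version):

```cpp
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxCv.h"

ofxKinect kinectA, kinectB;   // placeholder names
ofFbo stitched;
ofPixels stitchedPixels;

void setup(){
    stitched.allocate(1280, 480, GL_RGB);   // wide enough for two 640x480 depth images
}

void update(){
    kinectA.update();
    kinectB.update();

    // draw both depth feeds into one FBO; any skew/ROI/gradient correction happens here
    stitched.begin();
    ofClear(0);
    kinectA.getDepthTexture().draw(0, 0);     // left half
    kinectB.getDepthTexture().draw(640, 0);   // right half, shifted so the views line up
    stitched.end();

    // read back and hand a single grayscale image to OpenCV
    stitched.readToPixels(stitchedPixels);
    cv::Mat rgb = ofxCv::toCv(stitchedPixels);
    cv::Mat gray;
    cv::cvtColor(rgb, gray, cv::COLOR_RGB2GRAY);
    // gray is the one image that goes into blob tracking
}
```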
What I am doing is running each thresholded depth image separately through OpenCV.
Then I use one origin as the reference that all the track points are offset from, with different offsets for the different kinects.
Then I check if a track point from one kinect is very close to one from the other kinect.
If they are, I merge their positions.
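For that merge step, roughly something like this (a minimal sketch, not the exact code; the merge distance is just a placeholder threshold):

```cpp
#include "ofMain.h"

// Merge track points from two kinects that end up closer than some
// threshold in the shared world space; keep everything else as-is.
std::vector<ofVec3f> mergeTrackPoints(const std::vector<ofVec3f>& fromKinectA,
                                      const std::vector<ofVec3f>& fromKinectB,
                                      float mergeDistance /* e.g. 0.3 m */){
    std::vector<ofVec3f> merged = fromKinectA;
    for(const auto& b : fromKinectB){
        bool foundMatch = false;
        for(auto& m : merged){
            if(m.distance(b) < mergeDistance){
                m = (m + b) * 0.5f;       // average the two near-duplicate positions
                foundMatch = true;
                break;
            }
        }
        if(!foundMatch) merged.push_back(b);  // blob only seen by kinect B
    }
    return merged;
}
```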
In your stitching method, how do you decide how far to overlap the images?
Isn't that different depending on the height of the object you are tracking?
That's why, so far, I am trying to work with point clouds and place the track points in the point clouds' 3D space as well.
One way to deal with it is to register one kinect's frame to the other, using PCL-style ICP-based point cloud registration under some calibration condition (for example, the subject standing in the intersection of the two kinects' FOVs), then apply the resulting transformation matrix to one of the depth images and merge the two into a single depth image. Then feed that into OpenCV etc.
I can't share the code at this point for purely technical reasons; however, you could implement this so that no actual PCL is involved. Assuming the kinects are not moving relative to each other:
Get a single frame from each kinect while some object is seen by both of them (two frames in total); remove things like the floor using thresholding, so only the distinct features of the subject remain;
Export them as OBJ files (for example, using Roy McDonald's OBJ exporter);
Import the two point clouds into MeshLab;
Use MeshLab to register one point cloud to the other using a combination of manual alignment and ICP (there is a YouTube tutorial on how to do that);
You will get a transformation matrix;
Apply the transformation matrix to one of the kinects' feeds in your program, in the world coordinate space (see the sketch after this list);
Merge the above point cloud with the other one;
Convert the combined point cloud into camera view space.
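To illustrate the "apply the matrix and merge" step, here is a rough sketch assuming your points are glm::vec3 in world space and you paste in the 4x4 matrix that MeshLab reports (all matrix values below are placeholders):

```cpp
#include <vector>
#include <glm/glm.hpp>

// Placeholder values: replace with the matrix MeshLab gives you after alignment.
// MeshLab prints the matrix row by row, while glm::mat4 is column-major,
// so the transpose below converts between the two conventions.
const glm::mat4 meshlabRowMajor(
    1, 0, 0, 0.12f,   // r00 r01 r02 tx
    0, 1, 0, 0.00f,   // r10 r11 r12 ty
    0, 0, 1, 2.35f,   // r20 r21 r22 tz
    0, 0, 0, 1);
const glm::mat4 kinectBtoA = glm::transpose(meshlabRowMajor);

// Bring kinect B's cloud into kinect A's frame and append it to A's cloud.
std::vector<glm::vec3> mergeClouds(const std::vector<glm::vec3>& cloudA,
                                   const std::vector<glm::vec3>& cloudB){
    std::vector<glm::vec3> combined = cloudA;
    combined.reserve(cloudA.size() + cloudB.size());
    for(const auto& p : cloudB){
        glm::vec4 q = kinectBtoA * glm::vec4(p, 1.0f);  // homogeneous point transform
        combined.emplace_back(q.x, q.y, q.z);
    }
    return combined;
}
```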
Hello, I am also trying to stitch together the feeds from two kinects into one grayscale image and then send it to OpenCV. My kinects are also mounted to the ceiling, and the reason I am using two is that my ceiling is not high enough for one kinect to track a large space. I am interested in stitching the two images together in an FBO before outputting them as a grayscale image, just like you said. How do I go about doing this?