Hi all, I’m trying to stitch together 6 cams that are fixed to the ceiling in a 3x2 grid. (The end goal is to pass the stitched image to OpenCV in order to track people walking around a floor space, and then use that information to feed a synthesis engine.)
I was hoping to use the cv::Stitcher class, but it looks like it expects images that have been rotated around an axis rather than translated along two.
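(For reference, newer OpenCV versions also expose a SCANS mode on cv::Stitcher, intended for roughly planar content translated between shots rather than a rotating camera, which may be closer to the ceiling-camera case. A minimal sketch, with placeholder image paths:)

```cpp
// Minimal cv::Stitcher sketch (OpenCV 3.4+ / 4.x). Image paths are placeholders.
#include <opencv2/opencv.hpp>
#include <opencv2/stitching.hpp>
#include <vector>

int main() {
    std::vector<cv::Mat> frames;
    frames.push_back(cv::imread("cam0.png"));
    frames.push_back(cv::imread("cam1.png"));

    // SCANS mode assumes roughly planar content translated between shots,
    // unlike the default PANORAMA (rotation-only) camera model.
    cv::Ptr<cv::Stitcher> stitcher = cv::Stitcher::create(cv::Stitcher::SCANS);

    cv::Mat stitched;
    cv::Stitcher::Status status = stitcher->stitch(frames, stitched);
    if (status != cv::Stitcher::OK) {
        // Stitching often fails when the overlap is small or low-texture.
        return 1;
    }
    cv::imwrite("stitched.png", stitched);
    return 0;
}
```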
The only thing I could find was this, which illustrates what I’m trying to do but didn’t give me enough information to do it myself.
I can get the cams in, undistort and display them with ofxCv::Calibration, but I’m a little stumped as to where to go next.
I’m experimenting with 2 cameras at the moment, manually selecting regions from the streams and crop/pasting them into a new image.
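Something like this minimal OpenCV sketch, with placeholder ROI coordinates picked by eye:

```cpp
// Rough sketch of manually compositing two camera frames into one canvas.
// The ROI coordinates are arbitrary placeholders chosen by eye.
#include <opencv2/opencv.hpp>

cv::Mat composite(const cv::Mat& camA, const cv::Mat& camB) {
    // Canvas wide enough for both crops side by side.
    cv::Mat canvas(480, 1100, camA.type(), cv::Scalar::all(0));

    // Pick the useful region of each camera, trimming the overlap by hand.
    cv::Mat cropA = camA(cv::Rect(0, 0, 600, 480));    // left part of camera A
    cv::Mat cropB = camB(cv::Rect(140, 0, 500, 480));  // right part of camera B, skipping the overlap

    // Paste both crops into the canvas at the chosen offsets.
    cropA.copyTo(canvas(cv::Rect(0, 0, 600, 480)));
    cropB.copyTo(canvas(cv::Rect(600, 0, 500, 480)));
    return canvas;
}
```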
It seems like this isn’t a feasible approach for tracking: the overlap between the images shifts so much between head height and floor height (parallax) that I can’t get reliably persistent tracking at the boundaries.
Has anyone got any tips for how to approach this problem? I would like to be able to track an area of about 6 m² or greater, but I will likely only have a ceiling-to-floor height of about 3 m. I was thinking about angling a camera with a wide-angle lens and correcting for perspective, but I’m not sure how to deal with occlusion.
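A minimal sketch of the perspective-correction idea, assuming the four source points would come from clicking the corners of a known rectangle (e.g. tape marks) on the floor in the camera image; the coordinates here are placeholders:

```cpp
// Sketch of flattening an angled camera view onto the floor plane.
#include <opencv2/opencv.hpp>

cv::Mat rectifyFloor(const cv::Mat& frame) {
    // Floor rectangle as seen by the angled camera (placeholder values).
    cv::Point2f src[4] = { {120, 80}, {520, 95}, {600, 470}, {40, 460} };
    // Same rectangle in top-down floor coordinates.
    cv::Point2f dst[4] = { {0, 0}, {400, 0}, {400, 400}, {0, 400} };

    cv::Mat H = cv::getPerspectiveTransform(src, dst);

    cv::Mat topDown;
    cv::warpPerspective(frame, topDown, H, cv::Size(400, 400));
    return topDown;   // only the floor plane is flattened; heads/bodies will still smear
}
```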
I have stitched images using transparency on the overlapped images, just doing it by eye to get a reasonable zone where you don’t lose people between camera frustums. It can be a little weird on the edges of the two images, but if you are just trying to detect “is there a single person there?” it can work well.
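A minimal sketch of what that by-eye alpha overlap might look like in an openFrameworks draw(), assuming camA and camB are ofVideoGrabber members of ofApp and the offsets are placeholders you nudge until the overlap lines up:

```cpp
// Overlapping two camera feeds with transparency, openFrameworks style.
#include "ofApp.h"

void ofApp::draw() {
    ofEnableAlphaBlending();

    ofSetColor(255, 255, 255, 255);
    camA.draw(0, 0);              // first camera, fully opaque

    ofSetColor(255, 255, 255, 128);
    camB.draw(520, 0);            // second camera at half alpha, nudged by eye

    ofDisableAlphaBlending();
}
```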
A single high-res camera with a very wide lens might work better for you – there are some seriously wide-angle lenses out there, but you will lose a lot of resolution as you undistort.
Thanks for the reply! I’d actually just ordered a fisheye lens!
I’m going to try it out with the PS3Eye. If I can get it to undistort satisfactorily but find that the res is too low, I should be able to beg/borrow a higher-res camera.
This looks promising for undistorting fisheyes.
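For reference, OpenCV also has a dedicated fisheye module with its own undistort call. A minimal sketch, assuming K (camera matrix) and D (distortion coefficients) come from a prior cv::fisheye::calibrate() run on checkerboard images:

```cpp
// Undistorting a fisheye frame with OpenCV's fisheye module.
#include <opencv2/opencv.hpp>
#include <opencv2/calib3d.hpp>

cv::Mat undistortFisheye(const cv::Mat& distorted, const cv::Mat& K, const cv::Mat& D) {
    cv::Mat undistorted;
    // The last argument (Knew) controls how much of the stretched edges you keep;
    // reusing K keeps the original focal length.
    cv::fisheye::undistortImage(distorted, undistorted, K, D, K);
    return undistorted;
}
```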
I’ll play around with the perspective idea while I’m waiting for the lens.
For a higher-res camera, I can recommend using an HD security camera and a Blackmagic capture device – you can get 1080p input and there are some really crisp lenses out there. I like running video over SDI; it makes installs easier.
Hi, I made an installation some months ago in which I needed to track people’s contours. I ended up using three Kinects (v1). I stitched the point clouds together manually and then reprojected the whole stitched point cloud. There was a lot of overlap, but this method gave me none of the frustum problems and weird artifacts that happen when just using a regular camera. I experimented with an automated stitching method using OpenCV’s checkerboards, but it didn’t give accurate results, so I ended up doing it manually. I know that much better alignment can be achieved with point cloud fitting algorithms, as I experimented with them in the past, but I had no time to implement that for this project. I can share some of that code, but it might be a bit messy (quite “ghetto” as @zach would say). I’ll post the link once I’ve uploaded it.
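A rough sketch (not Roy’s actual code) of what the manual stitch-and-merge might look like in a recent openFrameworks, where ofMesh uses glm types: each Kinect’s cloud gets a hand-tuned rigid transform before being appended into one combined mesh.

```cpp
// Merging point clouds from several Kinects into one mesh in openFrameworks.
#include "ofMain.h"

ofMesh mergeClouds(const std::vector<ofMesh>& clouds,
                   const std::vector<glm::mat4>& transforms) {
    ofMesh merged;
    merged.setMode(OF_PRIMITIVE_POINTS);

    for (size_t i = 0; i < clouds.size(); i++) {
        for (const glm::vec3& v : clouds[i].getVertices()) {
            // Move this Kinect's points into the shared world space.
            glm::vec4 world = transforms[i] * glm::vec4(v, 1.0f);
            merged.addVertex(glm::vec3(world));
        }
    }
    return merged;
}

// transforms[i] would be built interactively (e.g. with a gizmo addon)
// and tweaked until the overlapping clouds line up.
```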
That would be brilliant – I’d be really interested in seeing how you made that work. I haven’t really played around with the Kinect cameras, but I have a couple of ideas for projects that involve them.
I don’t have a massive amount of time, so I think, for this project, I’m going to go ahead with the standard cameras. I’ve had some success blending two cameras together with alpha (thanks @zach!). I got some wide-angle lenses and modded the PS3Eye cameras to mount them, so I won’t need to stitch as many. Seems to be working so far!
No doubt I’ll be back with some more OF questions. I’ve just moved over from Processing so I’m on a bit of a learning curve with a deadline fast approaching!
Thanks again for the pointers guys, look forward to seeing the pointcloud stitching code.
I’m also trying to wrap my head around stitching point clouds from multiple depth cameras, and would love to see an example! I’m having trouble finding resources that I can digest without a CS degree.
Also wondering if Kinect Fusion could be done with multiple cameras this way…
@ttyy If you want to automate it you’ll need to do some more complicated stuff, although you don’t need a CS degree. Done by hand, it is just moving, rotating and scaling the different point clouds until they align, using some kind of GUI. I used one called ofxManipulator, which is really nice. I’ll try to upload some code during the weekend.
best
Hey thanks so much Roy, I would love to see some sample code if you have a chance. I’ve been using ofxGizmo lately for 3d manipulations - haven’t tried ofxManipulator.
I’m interested in automatic fitting of point clouds (w/out checkerboard).
Is there a good method for that? PCL seems very intimidating.
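For automatic fitting after a rough manual alignment, PCL’s iterative closest point is the usual starting point. A minimal sketch, assuming the two clouds already overlap approximately (ICP only does local refinement) and the tuning values are placeholders:

```cpp
// Refining a rough manual alignment with PCL's IterativeClosestPoint.
#include <pcl/point_types.h>
#include <pcl/point_cloud.h>
#include <pcl/registration/icp.h>

Eigen::Matrix4f alignClouds(pcl::PointCloud<pcl::PointXYZ>::Ptr source,
                            pcl::PointCloud<pcl::PointXYZ>::Ptr target) {
    pcl::IterativeClosestPoint<pcl::PointXYZ, pcl::PointXYZ> icp;
    icp.setInputSource(source);
    icp.setInputTarget(target);
    icp.setMaxCorrespondenceDistance(0.1);  // metres; placeholder tuning value

    pcl::PointCloud<pcl::PointXYZ> aligned;
    icp.align(aligned);                     // aligned = source transformed onto target

    // Transform that maps the source cloud onto the target cloud.
    return icp.hasConverged() ? icp.getFinalTransformation()
                              : Eigen::Matrix4f::Identity();
}
```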
Going back to the 2D stitching (sorry to hijack this thread) – is there a good method for panorama image stitching somewhere in OpenCV or elsewhere?