Stereo Vision

Hi guys, I’d like to do some research in stereo vision and I would like to create a compact solution that is at least Mac and Windows compatible.

I’d like to connect just two webcams. The Fire-i cameras (http://www.unibrain.com/Products/Vision-…-e-i-DC.htm) recommended by Theo on this thread:

http://forum.openframeworks.cc/t/multiple-live-video-advice/499/0

look great, though I presume the fact that they are FireWire will make them quite hard to handle on Windows. The quad option is definitely too big; I really would like it to be just two USB webcams.

I presume a notebook without an extra USB card could handle a couple of webcams at 320x240, couldn’t it? If so, which webcams would you recommend?

Thanks a lot for any suggestion, chr

Hi!

I’m pretty sure the Fire-i works on Windows with videoInput - can anyone who has one test?

Best bet for webcams and mac would be to cross check against this list:
http://wiki.openframeworks.cc/index.php-…-cam-Driver

Pick one that seems decent and has good support with that driver, and you are pretty much guaranteed it will work just as well on Windows. I have had good luck with both Logitech cameras and the Philips SPC1000NC wide-angle camera. The Philips is quite nice because it can do up to 1024x768 and it has manual focus on the lens itself.

hope that helps!
theo

Hi Theo, the Philips SPC1000NC doesn’t appear on the supported list. It seems interesting, though: do you think you can set a lower resolution from the Mac and then use two with one USB card? The Fire-i is pretty pricey here in the UK. The 80° angle is pretty remarkable; even the Microsoft VX-6000, which is declared as wide, has only 71°. Should I order two Philips straight away? :slight_smile:

Cheers, chr

It’s not listed, but it does work with the latest macam drivers.
Don’t know about two at the same time, though - I don’t see why it wouldn’t work :slight_smile:

Yeah, the 80 degrees is pretty nice - super wide. Love the manual focus too.

Not sure how much of the auto image adjustment you can turn off - is that super important for what you are doing?

I can always test that with the one I have here.

cheers,
theo

Hi Theo, as with any computer vision setup, disabling the auto image adjustment is super important. If you can check that, it would be great and I will order two straight away! :slight_smile:

Thanks, chr

The goal of using two web cameras for stereo vision is a good one, but challenging. Here are some thoughts based on my recent experience.

The first thing you should know is that there are several different flavors of stereo vision equations. These are extremely well explained in Trucco & Verri’s “Introductory Techniques for 3-D Computer Vision” book from 1998. The basic thing to know is that there are the “simple” stereo equations and then the “true” (epipolar) stereo equations. The “simple” equations assume that your cameras are parallel, which is to say that their optical axes are exactly parallel, and that they are at exactly the same height, zoomed the same, and so forth. The epipolar stereo equations allow for cameras which may be pointing in slightly different directions – but the math involved is much more complex.
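
To make the “simple” case concrete: with truly parallel cameras, depth falls straight out of the disparity of a matched feature. A minimal sketch (my own variable names, assuming the focal length is expressed in pixels):

```cpp
// "Simple" stereo (Trucco & Verri): with perfectly parallel cameras,
// depth follows directly from the disparity of a matched feature.
//   focalPx        - focal length expressed in pixels
//   baseline       - distance between the two camera centers (e.g. meters)
//   xLeft, xRight  - x-coordinate of the same feature in each image,
//                    measured from each image's center
float depthFromDisparity(float focalPx, float baseline,
                         float xLeft, float xRight) {
    float disparity = xLeft - xRight;   // shrinks as objects get farther away
    if (disparity <= 0) return -1;      // no valid match
    return (focalPx * baseline) / disparity;   // Z = f * T / d
}
```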

Last summer I attempted to use two security-style cameras for stereo vision. I used $100 board cameras from Supercircuits and digitized them through 2 ImagingSource capture boards using Theo’s videoInput.

The major problem I encountered was that it was EXTREMELY difficult to align the cameras well enough to use the simple stereo math. And I had really precise mechanical alignment tools! The awful thing is that errors of a small fraction of 1 degree would lead to parabolic error. In other words, the error in depth estimation skyrocketed according to the square of the distance from the camera.

Eventually I was able to get some reasonable accuracy by solving for and then incorporating a parabolic model of the error. One of the things which made this easier is that the board cameras I was using had fixed-focal-length (non-zoom), pre-focused lenses. Even so, the calibration procedure was pretty grueling.
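
For what it’s worth, the correction itself boils down to something like this (a sketch with made-up names; the coefficients a, b, c come from measuring targets at known distances):

```cpp
// Hypothetical parabolic error model: measure targets at known depths,
// fit a quadratic to (measured - true), then subtract it back out.
float correctDepth(float zMeasured, float a, float b, float c) {
    float modeledError = a * zMeasured * zMeasured + b * zMeasured + c;
    return zMeasured - modeledError;
}
```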

I predict one of the major challenges in using a pair of web cams will be mounting them in such a way that their axes are aligned and their zooms/focuses are equal. If you can handle the math (which I chickened out of), I would also go with the true epipolar stereo equations rather than the simple stereo.

Currently I am using a Point Grey Bumblebee2 for stereo vision. The advantage of the device is that the two cameras are already very precisely positioned and calibrated. Another advantage is that the Point Grey SDK is quite fast (it’s based on IPP) and does a decent job at finding XYZ positions of objects in the scene. The very obvious disadvantage, of course, is that this camera costs around $2000. In this case I am substituting money for time. I got the Bumblebee2 working with openFrameworks in only a few hours, and I am now getting highly-accurate (16-bit), 320x240 stereo depth maps at 50fps – without doing any math, either.

golan

Hi Golan, how are you? It’s been a long time (I’m the Italian Christian) :wink:

Cheers, chr

Surprisingly, on the Mac the camera isn’t doing any auto-irising that I can tell. I used the ofxOpenCv example to test: I grabbed the background, then tried covering and uncovering the lens etc. to try to trigger auto adjustments. I couldn’t get it to do any auto stuff, which is good.

There is a small amount of pixel noise, though, that shows up as white pixels when I have the threshold set below 40.
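
For anyone who wants to repeat the test, it was basically just the stock opencvExample with nothing clever added; a rough sketch from memory, using the current ofVideoGrabber / ofxOpenCv calls:

```cpp
// Background-difference test: grab one reference frame, then threshold
// the per-pixel difference. If the camera auto-adjusts (gain, iris,
// white balance), the whole image drifts away from the reference
// and lights up white.
#include "ofMain.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber      grabber;
    ofxCvColorImage     colorImg;
    ofxCvGrayscaleImage grayImg, grayBg, grayDiff;
    bool learnBackground = true;
    int  threshold = 40;   // below ~40 the sensor noise starts to show

    void setup() {
        grabber.initGrabber(320, 240);
        colorImg.allocate(320, 240);
        grayImg.allocate(320, 240);
        grayBg.allocate(320, 240);
        grayDiff.allocate(320, 240);
    }

    void update() {
        grabber.update();
        if (!grabber.isFrameNew()) return;
        colorImg.setFromPixels(grabber.getPixels());
        grayImg = colorImg;
        if (learnBackground) { grayBg = grayImg; learnBackground = false; }
        grayDiff.absDiff(grayBg, grayImg);   // per-pixel |current - reference|
        grayDiff.threshold(threshold);       // leftover white = noise or drift
    }

    void draw() { grayDiff.draw(0, 0); }
};
```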

Thanks for the stereo vision info Golan!
Didn’t realize it was that hard but it makes a lot of sense.

Those Point Grey Bumblebee2s look really nice - I’m jealous :slight_smile:

A little off-topic:
Nice video from Cambridge University doing 3D tracking with two Wii controllers:
http://www.youtube.com/watch?v=CT6aQN-lwmo&NR=1 I am looking forward to playing with those for tracking (100fps 1024x768 IR cam for $40!)

Theo

Thanks for the info Theo. About that experiment, well, it is definitely on topic!!!

Cheers, chr

Hi Theo, I received the SPC1000NC and my Mac doesn’t seem to recognize it, while it does recognize an older Philips I have. I have the latest version of macam (0.9.2) and a MacBook Pro (Intel). Is there anything in particular I should do to make it work?

Thanks, chr

Hey - the macam demo application doesn’t work for me either, but the QuickTime component works for me and lets me use it with any of the openFrameworks apps.

Try installing the component and then running the movie grabber example.

cheers!
theo

Actually, it works with the example. My listDevices() call doesn’t work, though. Are you aware of any problem on the Mac? It says:

“error in list devices, couldn’t allocate grabbing component”

On Windows it works fine.

I’ll search on the forum; of course, knowing the device IDs would be pretty important.
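
For reference, I’m not doing anything exotic, just the stock grabber calls, roughly:

```cpp
// What I'm attempting: list the cameras, then bind two grabbers to
// two different device IDs (the IDs come from listDevices()).
ofVideoGrabber grabberA, grabberB;
grabberA.listDevices();        // this is the call that fails on my Mac
grabberA.setDeviceID(0);
grabberA.initGrabber(320, 240);
grabberB.setDeviceID(1);
grabberB.initGrabber(320, 240);
```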

Cheers, chr

Chris pinged me offline about the Bumblebee and so I thought I’d answer his questions here.

- you mentioned that you can track objects decently in a 3D space. Do you think you can do a multi-touch-less interface (a la Jeff Han, but with the distance of the fingers from the screen as data too, probably more a la Minority Report)?

I didn’t say I was “tracking objects” decently with the Bumblebee – only that I was getting high-performance disparity maps. The tracking is something that you have to write the code for yourself, and that’s the hard part; so far it is slowing down my CPU a lot.

In theory, you could make such a finger-tracking system with the Bumblebee, but there are a lot of practical difficulties for this idea. Jeff’s system works so well because he has a nice, clean, high-contrast way of finding the finger tips. With his frustrated-internal-reflection system, the fingertips show up as white blobs on a black background. It doesn’t get better than that.
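
That’s part of why his tracking looks so effortless: with a clean binary image, the blob extraction is nearly free, e.g. with ofxOpenCv (a sketch, not Jeff’s actual code):

```cpp
// Fingertip blobs from a clean, thresholded FTIR-style image:
// white blobs on a black background practically segment themselves.
ofxCvGrayscaleImage binaryImg;   // assume: allocated and filled with a
                                 // thresholded camera frame
ofxCvContourFinder contourFinder;

// min area 20 px, max area a quarter of the frame, up to 10 blobs
contourFinder.findContours(binaryImg, 20, (320 * 240) / 4, 10, false);
for (int i = 0; i < contourFinder.nBlobs; i++) {
    float tipX = contourFinder.blobs[i].centroid.x;
    float tipY = contourFinder.blobs[i].centroid.y;
    // (tipX, tipY): a fingertip candidate in image coordinates
}
```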

Watch out: The depth-from-stereo stuff is GRAYSCALE (meaning lots of continuous data, not easily segmented), very NOISY, full of actual ERRORS, has lots of HOLES (undefined regions), and requires lots of post-processing. Here’s a representative example – my own results are similar to this: http://www.vincent-net.com/gaile/papers/CVPR98/img3.gif. Tracking fingers in 3D with this kind of noise is not something I would be keen to try, given my experience with the Bumblebee2 so far. Here’s a capture of me holding out my hand: http://www.flickr.com/photos/golanlevin/2564500689/
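
If you do go down this road, budget time for cleanup passes along these lines (a rough sketch using OpenCV’s C++ API, not the Point Grey SDK):

```cpp
#include <opencv2/opencv.hpp>

// Typical first-aid for a raw 8-bit disparity map before trying to
// track anything in it.
cv::Mat cleanDisparity(const cv::Mat& rawDisp /* CV_8UC1 */) {
    cv::Mat disp;
    // 1. Median filter knocks out salt-and-pepper mismatches.
    cv::medianBlur(rawDisp, disp, 5);
    // 2. Morphological close fills small holes (undefined regions).
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5));
    cv::morphologyEx(disp, disp, cv::MORPH_CLOSE, kernel);
    // 3. Zero out implausibly small disparities, i.e. features too
    //    far away to trust.
    cv::threshold(disp, disp, 8, 0, cv::THRESH_TOZERO);
    return disp;
}
```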

- you said that it took a few hours to make it work with OF. I presume for me it would be one week :slight_smile: What kind of resources would you recommend to manage it?

I’m certainly happy to share my Bumblebee-capture code for OF. I’ll post it someplace when I get around to it (I recently made a section of my own web site for this kind of code). To clean up the results and extract meaningful information from them, I wouldn’t want to be caught dead without OpenCV and IPP.

- about the angle, in the docs it says “3.8mm (65° HFOV) or 6mm (43°
HFOV) focal length”, do they provide two lenses?

No, you pick which camera model you want to buy. It comes with lenses that are fixed, pre-focused and calibrated. You can’t change or adjust the lenses. Also, as far as I can tell, you can’t turn off the automatic gain control, which is annoying me right now.

- what is its max distance?
Uh, it depends on which lens version you get, but I think it will work with features that are maybe up to 5 or 6 meters away. There’s also a high-resolution version (1024x768 instead of 640x480) which will resolve smaller (i.e. further) details. It costs more and will be really slow, I’ll bet.

- Did you make it work on a PC (with a FireWire board)?
Yep, that’s how I’m using it.

- Is it powered by the FireWire?
It doesn’t have to be, but it does require the 8-40 volts typical of FireWire. When I use it with my desktop, the Bumblebee camera gets its power from the FireWire port. When I connect the camera to my PC laptop (which only has the stupid powerless 4-pin FireWire connector), I use this cable (http://www.usbfirewire.com/Parts/rr-fw-ipod-adapter.html) and a standard 12V power adapter to power the camera.
- Does the calculation happen inside the device, so that it doesn’t use the computer’s CPU at all?
Ha ha, nice idea. No. All of the stereo calculation happens on the CPU with the Point Grey libraries. The stereo calculation takes about 25 milliseconds on my dual-core 2.4GHz desktop. I’m thinking of threading the capture to speed things up a bit.
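
The plan is just a classic producer thread; a sketch assuming the ofxThread addon, with grabNextFrame() standing in for the actual Point Grey capture call:

```cpp
#include "ofxThread.h"
#include <cstring>

// Stand-in for the blocking Point Grey capture call (hypothetical;
// declared here only so the sketch is self-contained).
unsigned char* grabNextFrame();

// Pull frames off the camera in a background thread so the stereo
// computation and the draw loop never wait on the camera.
class CaptureThread : public ofxThread {
public:
    unsigned char pixels[320 * 240 * 3];
    bool hasNewFrame;

    CaptureThread() : hasNewFrame(false) {}

    void threadedFunction() {
        while (isThreadRunning()) {
            unsigned char* frame = grabNextFrame();  // blocks until a frame arrives
            lock();                                  // guard the shared buffer
            memcpy(pixels, frame, sizeof(pixels));
            hasNewFrame = true;
            unlock();
        }
    }
};
// In the app: capture.startThread(true, false); then in update(),
// lock(), copy pixels out if hasNewFrame, clear the flag, unlock().
```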

Best
golan

Thanks a lot for all the info, Golan. The noise is pretty impressive. Finger tracking with that data seems pretty much impossible, but I’m sure a super expert like you would have better chances. I look forward to seeing what you are up to, and I will think about it.

Cheers, chr

Hi All,

I thought that, to show some of the strengths and weaknesses of the Bumblebee, I would post a few projects we have made with the PTGrey camera here at United Visual Artists:

“Echo”

http://www.uva.co.uk/archives/48

“Interactive Installation Prototype”

http://www.uva.co.uk/archives/32

“Colder - To The Music”

http://www.uva.co.uk/archives/11

As Golan said, there are many errors and holes in the three-dimensional image you receive back, but the alignment of the lenses is superb; it is a very solid piece of metal indeed.

The interactive installation prototype demonstrates the level of tracking you can get. The Point Grey SDK also allows you to combine several 3D point clouds at once, though I have never attempted it.

BTW threading the Point Grey code was the only way we could get it to function at an interactive rate.

I’m trying to use a Minoru stereo webcam, but so far just about everything I have tried has failed miserably… there are lots of little problems that keep appearing, such as the drivers’ automatic white balancing and the less-than-perfect alignment of the cameras. I still feel it should be possible to get it working, but it’s going to take much more work than I hoped.

What OS are you using it on?
theo

I’m mostly developing under Windows, although I believe that Linux drivers have recently become available.