Stitching many cameras together for ofxOpenCv blob analysis

Hello oF-Forum
I have some more questions. I want to build a webcam-stitching application using 2 or 3 webcams. I think I can get all the webcams working together and do all the stitching inside an FBO in OpenGL. What I don’t know how to do is capture the usable portion of the resulting frame and send it to ofxOpenCv in order to do blob detection.
I’m guessing the steps are:
1.- Capture webcam frames.
2.- Draw frames onto a FBO and warp them.
3.- Capture the resulting usable portion of the frame into another texture or something. (Need help 1)
4.- Send the new texture to ofxOpenCv. (Need help 2)

I’ve attached an image of what I wish to accomplish, and I will upload the source when I have anything useful.
I would also like to ask about the technical concepts and requirements for this. For example, do all the webcams need to be the same model? Why? (This is because I wish to learn :wink: )

I also recently bought a couple of Point Grey cameras, but I would like to try this application using very cheap webcams. Also, for personal reasons, I wish to do this using nothing more than openFrameworks v0.07.

Thanks! =D

(Image copyright… definitely NOT mine, just found on google)

Rough code… I’m sure OpenCV has its own method for this; it always does.

Assumes you have a combinedFBO filled with your tilted and morphed webcam images.

  
  
ofTexture stitchedImage = combinedFBO.getTextureReference();  // grab the FBO's texture  
  
ofPixels stitchedPixels;  
stitchedPixels.allocate(combinedFBO.getWidth(), combinedFBO.getHeight(), OF_IMAGE_COLOR_ALPHA); // pixel container with an easy crop()  
  
stitchedImage.readToPixels(stitchedPixels); // read the texture back from the GPU into the pixels  
  
stitchedPixels.crop(startX, startY, sizeX, sizeY); // keep only the usable region  
  
ofImage sendToOpenCV; // what we want  
sendToOpenCV.setFromPixels(stitchedPixels); // wrap the cropped pixels in an ofImage  
  

It seems promising memphistechno. Thanks!

I do have a question regarding the method. In the single camera “opencvExample” there is mention of a bool that checks if the actual frame is a new frame:

  
 vidGrabber.grabFrame();  
bNewFrame = vidGrabber.isFrameNew();  
  

I’m guessing that even if two webcams are the same, they won’t necessarily stream frames at the exact same rate.
Does one have to deal with these kinds of sync problems?
=D
I’ll try your method tomorrow… I’m gonna go buy some webcams! :stuck_out_tongue:

I think that method exists to make sure your 900fps program doesn’t try to process your 30fps webcam stream at 900fps. I don’t think camera sync is going to be an issue in this program, I’d let it run and see what happens.

The only webcam-class camera I have seen give predictable sync is the PlayStation Eye, which, according to the structured-light demos, can provide a reliable 60fps.

I would also investigate masking in OpenCV. All of my uses for it are specific and limited, but I have come across numerous fast masking and cropping methods implemented in OpenCV itself.
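For cropping without extra pixel copies, ofxCvImage also exposes `setROI()` / `resetROI()`, which make subsequent OpenCV calls operate on just a sub-rectangle. Whichever route you take, it is worth clamping the requested rectangle to the image bounds first; a small sketch (the struct and helper names are mine, not part of oF or OpenCV):

```cpp
#include <algorithm>

// Hypothetical helper: clamp a requested region of interest to the
// image bounds so OpenCV is never handed a rectangle that hangs off
// the edge of the stitched frame.
struct Roi { int x, y, w, h; };

Roi clampRoi(Roi r, int imgW, int imgH) {
    r.x = std::max(0, std::min(r.x, imgW));
    r.y = std::max(0, std::min(r.y, imgH));
    r.w = std::max(0, std::min(r.w, imgW - r.x));
    r.h = std::max(0, std::min(r.h, imgH - r.y));
    return r;
}
```

The clamped values could then go straight into something like `colorImg.setROI(r.x, r.y, r.w, r.h)`.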

Sure, I will check all that you have mentioned. Thank you :smiley:

Everything seems to be working with two cameras, but I have three identical Microsoft LifeCam HD-5000s. I bought them to experiment with this and see where it might fail. I’ve tested them but I don’t know what to make of this. I tried connecting the webcams both to a powered USB hub and directly to my computer, and I switched them around many times, so I’ve ruled out a hardware failure.

What do you guys think?
Should I use some other custom drivers?
Do custom drivers exist? =D

The problem is here with device 2. As you can see, its setup starts but the device exits immediately; it fails on “ERROR: Could not start graph”:

[tt]
***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****

SETUP: Setting up device 0
SETUP: Microsoft LifeCam HD-5000
SETUP: Couldn’t find preview pin using SmartTee
SETUP: Default Format is set to 640 by 480
SETUP: trying format RGB24 @ 640 by 360
SETUP: Capture callback set
SETUP: Device is setup and ready to capture.

***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****

SETUP: Setting up device 1
SETUP: Microsoft LifeCam HD-5000
SETUP: Couldn’t find preview pin using SmartTee
SETUP: Default Format is set to 640 by 480
SETUP: trying format RGB24 @ 640 by 360
SETUP: Capture callback set
SETUP: Device is setup and ready to capture.

***** VIDEOINPUT LIBRARY - 0.1995 - TFW07 *****

SETUP: Setting up device 2
SETUP: Microsoft LifeCam HD-5000
SETUP: Couldn’t find preview pin using SmartTee
SETUP: Default Format is set to 640 by 480
SETUP: trying format RGB24 @ 640 by 360
SETUP: Capture callback set
ERROR: Could not start graph

SETUP: Disconnecting device 2
SETUP: freeing Grabber Callback
ERROR - Could not pause pControl
SETUP: freeing Renderer
SETUP: freeing Capture Source
SETUP: freeing Grabber Filter
SETUP: freeing Grabber
SETUP: freeing Control
SETUP: freeing Media Type
SETUP: removing filter Microsoft LifeCam HD-5000…
SETUP: filter removed Microsoft LifeCam HD-5000
SETUP: freeing Capture Graph
SETUP: freeing Main Graph
SETUP: Device 2 disconnected and freed

OF: OF_LOG_ERROR: error allocating a video device
OF: OF_LOG_ERROR: please check your camera with AMCAP or other software
[/tt]

I triple-checked with AMCAP, updated the firmware on all the webcams, and updated the LifeCam software. Independently they work great, and with two I have no problem; I just cannot get three working, as mentioned.

I’m using Windows 7 SP1 x64 and compiling with Code::Blocks 10.05.

It would be really cool to see this applied to images from multiple cameras:
http://blog.inspirit.ru/?p=343

I’m not sure what you mean, paranoio.
I’ve attached a screencapture of my progress. The sunglasses are for debugging purposes of course.

This is realtime, coming from 3 cameras connected to 1 computer, the warping is done using the mouse.
I still have that problem with the third camera but everything else is working.

This part doesn’t work for me; I have to use an ofxCvColorImage from ofxOpenCv, not an ofImage.
Any thoughts?

Edit---------------->
Don’t worry, I got it working with another method. :stuck_out_tongue:

@irregular
have you solved this?
You can use ofxCvColorImage::setFromPixels(const unsigned char * _pixels, int w, int h).
Then you must call ofxCvColorImage::flagImageChanged(), which updates the IplImage data inside ofxCvColorImage.
If you don’t call the latter method, any OpenCV function you call afterwards might not work.

Any luck with getting the 3 cameras to work?

cheers

@Roy
Yes, I did solve this. I used an intermediate ofPixels like this:

  
ofPixels paso_intermedio;                                   // intermediate pixel buffer  
paso_intermedio.allocate(w, h, image_type);  
FBO_de_paso.readToPixels(paso_intermedio);                  // GPU -> CPU readback  
colorImg.setFromPixels(paso_intermedio.getPixels(), w, h);  // hand the raw pixels to ofxOpenCv  

I think it is not very efficient, so I will try the method you mention.

About the cameras, I moved that question here:
http://forum.openframeworks.cc/t/many-usb-cameras-at-the-same-time-in-one-application/8179/0

I mean I’d like to see the same thing you did, but with automatic alignment like in the link I posted :slight_smile:

oh! yes that would be awesome! xD

I’d say it is working :stuck_out_tongue:
Although it eats processors xD
No sunglasses means I’m finished debugging…for now.

So now you can create a 360° installation?

well sure! :smiley:
The steps would be:
1-Expand the code to use more cameras.
2-…
3-Profit.

I don’t have much experience, but I think step 2 would involve mapping the cameras onto a 3D cylinder. I also guess you could make spherical video like the ones posted on the internet.
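The core of the cylinder idea can be written as plain math: each horizontal position across the stitched panorama maps to an angle, and that angle gives a point on the cylinder wall. A hedged sketch with a function name of my own (in oF you would use these positions as the vertices of the quads you texture with the camera frames):

```cpp
#include <cmath>

// Hypothetical helper: map a horizontal position u in [0, 1] across
// the stitched panorama to a point on a cylinder of a given radius.
// For a full 360-degree panorama, u = 0 and u = 1 land on the same spot.
void panoramaToCylinder(float u, float radius, float& x, float& z) {
    const float TWO_PI_F = 6.28318530718f;
    float angle = u * TWO_PI_F;   // fraction of a full turn
    x = radius * cosf(angle);
    z = radius * sinf(angle);
}
```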

I guess it’s not a cylinder: you just create panoramic video frames, then use a special player on the client side that lets users look around by dragging the video.

But for real 360° I have no idea.

It would be really cool to use your multiple-camera code to stream one video to a CDN server using OF :smiley:

Here is some OF stuff (and Processing) that works with the Sony Bloggie and may help:

http://www.flong.com/blog/2010/open-source-panoramic-video-bloggie-openframeworks-processing/

I am also trying the same thing, and I am able to run the cameras simultaneously.
Could you please share the code for combining the multiple input streams?