[SOLVED] multiple cams (ps3 eye) on Ubuntu 12.04 64 bits

Hello all!

Currently I am working on an app in which I want to track people in space using 2 cameras set at a 90 degree difference in viewing angle. I want to be able to pinpoint when two people are touching, and when they are not.

For this I wish to use two ps3 eye cams. I have previously set out to accomplish this, but in the end did not need two cams at that time (see: http://forum.openframeworks.cc/t/multiple-ps3eye-cams-on-windows-7-64bits/7790/0). I have now applied the change to ofGstVideoGrabber::get_device_data() suggested by Tim S.

At this point I see some problems, some unexplained behavior, and a little success:

  • Hardware: pc with an intel quad core, Ubuntu 12.04 64 bits, usb ports at the back, 2 at the front, and extra ports on a powered Sitecom hub.

  • Plugging two ps3 eye cams into the two ports on the front of the pc, I can run an app with the videoGrabbers, but it appears the load on the USB bus is too high: only one cam at a time gets updates, while the other hangs for a few frames (unusable).

  • Plugging the two ps3 eye cams into different ports, in any combination except the one described above: the application hangs at startup and, after some 15 seconds, crashes with “Segmentation fault (core dumped)”.

  • Plugging in one ps3 eye cam and a small Logitec webcam, things only seem to work fine when the ps3 eye is in a usb port on the front of the pc and the Logitec is in the usb hub. Still, this means there is at least one setup that actually works!

  • Note: using two instances of guvcview, I can view streaming video from two ps3 eye cams (both plugged into the front usb ports) at 30 fps at 640x480, or 125 fps at 320x240. Since there is limited control over video grabber settings inside OF, I do not know how to set the grab framerate; calling setDesiredFrameRate(30) does not improve results.

Does anyone else have experience using multiple cams inside OF on Linux? And maybe experienced similar issues? I know the ps3 eye can be heavy on data traffic (it can potentially use the full usb2.0 bandwidth), so it could be related to that, but the behavior I get is quite diverse and unpredictable. Maybe putting an extra pci-usb card in my pc could solve this? Or it could be something in the drivers behind it all (gstreamer)?
I would like to keep it inside Linux, since the SDK for ps3 multicam in Windows is closed source and nonfree. Also I don’t have the means to switch to firewire cams, I think.
If I find the time, I will do a similar test with ps3 eye cams using Pd/GEM to see if results are better or similar.
Thanks for any feedback.

Another issue I’ve run into on linux with the gst video grabber was that setDesiredFrameRate was being ignored. I submitted a fix to github but it hasn’t made its way to the 007 release I don’t think. Here’s the relevant issue: https://github.com/openframeworks/openFrameworks/issues/769

This might help by allowing you to set a lower framerate and thus lower data rate for the USB bus.

[quote=“Tim S, post:2, topic:9683”]I submitted a fix to github but it hasn’t made its way to the 007 release I don’t think. Here’s the relevant issue: https://github.com/openframeworks/openFrameworks/issues/769[/quote]
Well considering that 007 was released ~9 months ago (or even longer? I just looked at the github tag), and you filed this issue 7 months ago, it would be impossible for 007 to contain the fix. :wink:
Anyway, I checked, and the fix arturo committed is already in the master branch, so you could already use it by using OF master from the git repo, and you can be sure that the fix will be included in the upcoming release, 0071. :slight_smile:

I just wasn’t sure how often, if at all, bug fixes get packaged into the current zip file on the of website.

Thanks for the replies! Moving the call to setGrabber() inside the ofVideoGrabber.cpp file to the constructor fixes the issue for me. I can now have two ps3 eye cams connected, and it runs as desired at 30 fps (which is fine for my current purposes).

A small note on the side: the call to ‘setDesiredFrameRate()’ must be before initGrabber(). Otherwise, the framerate will not be as desired.
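For clarity, this is the order I mean, as a minimal sketch (assuming grabber is an ofVideoGrabber member of testApp):

```cpp
// order matters: request the frame rate first, then init the grabber,
// otherwise the requested rate is not applied
void testApp::setup(){
    grabber.setDeviceID(0);            // first cam, e.g. /dev/video0
    grabber.setDesiredFrameRate(30);   // must come BEFORE initGrabber()
    grabber.initGrabber(640, 480);
}
```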

Continuing to explore this, I have reverted the change to ofVideoGrabber regarding setDesiredFrameRate().

With my current setup, that change resulted in unexpected and hard-to-debug segmentation faults. Initially I got things working with 2 ps3 cams: the app took some seconds to start, but eventually everything ran at 30 fps.

However, when adding initialisations for some floats in testApp::setup(), I get segmentation faults. The debugger points to ofVideoGrabber.cpp and parts of Gst. Maybe this has to do with some design flaw in my current project, though I was unable to find anything, and reverting the change solves things. I also now use one ps3 cam and a small Logitec, and there are no seg faults…

I have not dug deeper into this, because I want to spend time writing my app rather than digging into gstreamer code :-). Later, I think I will create two apps and run them on separate computers, sending relevant data via OSC. It seems more stable that way.

Anyone else have experience with this kind of issue?

you could always collect your findings and report a bug (https://github.com/openframeworks/openFrameworks/issues). maybe arturo knows what’s going on.

have you tried the develop branch on github? several gstreamer issues have been fixed there, and video should be faster in general now

I did not try the develop branch yet, will look at it later.

Meanwhile, I solved this issue by implementing V4L2 directly in OF, creating a class based on this example: http://v4l2spec.bytesex.org/spec/capture-example.html.
See attached sources (uploading a .tar.gz did not work).
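For anyone curious what the class actually does, here is a minimal sketch of the first steps, following the capture example linked above (the attached sources differ in the details):

```cpp
#include <fcntl.h>
#include <unistd.h>
#include <cstring>
#include <cstdio>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// open a capture device and request 640x480 YUYV frames;
// returns the file descriptor, or -1 on failure
int openCamera(const char * dev){                // e.g. "/dev/video0"
    int fd = open(dev, O_RDWR | O_NONBLOCK, 0);
    if(fd < 0){ perror("open"); return -1; }

    v4l2_capability cap;
    memset(&cap, 0, sizeof(cap));
    if(ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0 ||
       !(cap.capabilities & V4L2_CAP_VIDEO_CAPTURE)){
        close(fd); return -1;                    // not a capture device
    }

    v4l2_format fmt;
    memset(&fmt, 0, sizeof(fmt));
    fmt.type                = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    fmt.fmt.pix.width       = 640;
    fmt.fmt.pix.height      = 480;
    fmt.fmt.pix.pixelformat = V4L2_PIX_FMT_YUYV; // packed YUV 4:2:2
    fmt.fmt.pix.field       = V4L2_FIELD_NONE;   // progressive
    if(ioctl(fd, VIDIOC_S_FMT, &fmt) < 0){ close(fd); return -1; }

    return fd;                                   // next: buffer setup (MMAP etc.)
}
```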

Currently it only captures grayscale, because I still have to read up on the YUYV pixel format and how to convert the data in a V4L2 buffer into an RGB array of chars. I am not (yet) familiar with the V4L2 API, nor with the differences between the pixel formats and io methods (right now I use the MMAP method).
However, I am able to initialise cameras and load the grabbed data into an array of unsigned chars for use in OF.
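In case someone wants to pick up the color part before I do: YUYV packs two pixels into four bytes (Y0 U Y1 V), so a conversion along these lines should do. This is a sketch using the common BT.601 integer approximation, not code from the attached class:

```cpp
// convert a YUYV buffer (width*height*2 bytes) to RGB24 (width*height*3 bytes)
static unsigned char clamp8(int v){
    return v < 0 ? 0 : (v > 255 ? 255 : v);
}

void yuyvToRgb(const unsigned char * src, unsigned char * dst, int width, int height){
    for(int i = 0; i < width * height / 2; i++){       // two pixels per iteration
        int y0 = src[4*i + 0], u = src[4*i + 1];
        int y1 = src[4*i + 2], v = src[4*i + 3];
        int d = u - 128, e = v - 128;
        for(int p = 0; p < 2; p++){
            int c = (p == 0 ? y0 : y1) - 16;
            dst[6*i + 3*p + 0] = clamp8((298*c + 409*e + 128) >> 8);          // R
            dst[6*i + 3*p + 1] = clamp8((298*c - 100*d - 208*e + 128) >> 8);  // G
            dst[6*i + 3*p + 2] = clamp8((298*c + 516*d + 128) >> 8);          // B
        }
    }
}
```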

Attached is the current class and a small example app. With my setup (Ubuntu 12.04 64 bits, quad core intel q6700, with one cam plugged into a Belkin pci usb card), this works without problems at 30 fps and 640x480 with four ps3 cams simultaneously. I do have to use the right combination of usb ports: putting more cams on the same hub results in flickering images, I think because buffers get overloaded. It also works at 30 fps with 6 cams (five ps3 eye cams and one small Logitec cam). Maybe it would work with even more cams, but I have no more cams :-D.
To build the attached app, you will need the libv4l-dev package installed. It also needs the -fpermissive compiler flag to build, because of the use of void * and the like.
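For reference, the void * issue comes mostly from mmap(); with explicit casts the -fpermissive flag should not be needed. A sketch of the MMAP buffer setup (assuming fd is an already opened and configured device, as in the earlier sketch):

```cpp
#include <cstring>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/videodev2.h>

struct MappedBuffer { unsigned char * start; size_t length; };

// request `count` driver buffers and map them into our address space
bool mapBuffers(int fd, MappedBuffer * buffers, unsigned int count){
    v4l2_requestbuffers req;
    memset(&req, 0, sizeof(req));
    req.count  = count;
    req.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    req.memory = V4L2_MEMORY_MMAP;
    if(ioctl(fd, VIDIOC_REQBUFS, &req) < 0) return false;

    for(unsigned int i = 0; i < req.count; i++){
        v4l2_buffer buf;
        memset(&buf, 0, sizeof(buf));
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_MMAP;
        buf.index  = i;
        if(ioctl(fd, VIDIOC_QUERYBUF, &buf) < 0) return false;

        void * mem = mmap(NULL, buf.length, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, buf.m.offset);
        if(mem == MAP_FAILED) return false;
        buffers[i].start  = static_cast<unsigned char *>(mem);  // explicit cast, unlike the C example
        buffers[i].length = buf.length;
    }
    return true;
}
```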

Using this small app http://sourceforge.net/projects/v4l2ucp/, I can also change all settings for every individual cam, except for frame rate. It is easy to build using cmake, and when run it opens a separate Qt dialog. Later I hope to get things working with two cams at 60 fps at 640x480, which should be possible; I just do not yet know how to set the grabbing framerate (I will take a look at arturo’s addons for V4L2 settings).
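For later reference, this is roughly what requesting the frame rate through V4L2 itself would look like, using the VIDIOC_S_PARM ioctl. Untested here, and the driver may adjust or ignore the request, so it makes sense to read back what was actually set:

```cpp
#include <cstring>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

// ask the driver for `fps` frames per second on an already opened device
bool setFrameRate(int fd, unsigned int fps){
    v4l2_streamparm parm;
    memset(&parm, 0, sizeof(parm));
    parm.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    parm.parm.capture.timeperframe.numerator   = 1;
    parm.parm.capture.timeperframe.denominator = fps;   // e.g. 60
    if(ioctl(fd, VIDIOC_S_PARM, &parm) < 0) return false;

    // the driver writes back the rate it will actually use
    return parm.parm.capture.timeperframe.denominator == fps;
}
```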

I am not sure if this is better or worse than the improved Gstreamer implementation, but things do seem quite stable.

On a side note, I found out that I have to sometimes clean before building, or rebuild everything, to get weird segfaults out of the way. I guess that’s normal, I just didn’t know :-).

Thanks for all the help! I am happy to hear any ideas (implement color? change/set framerate?) about the V4L2 class! However, that discussion may belong in the ‘extend’ part of the forum…

Menno

ofxV4L2.cpp

ofxV4L2.h

testApp.cpp

testApp.h

main.cpp

regarding V4L2, hvfrancesco has this already implemented for his projection mapping tool, so I guess there’s synergy/code to be had: https://github.com/hvfrancesco/lpmt

Browsing the source of ‘lpmt’ by hvfrancesco, I don’t see where V4L2 is implemented. I do see he created a tool to stream video as if it is coming from a V4L2 device (ofxGstV4L2Sink), but I guess that is the other way round from what I want. Am I overlooking something?

no probably not. maybe he would appreciate being able to offer “the other way round”, too, in his app.
Also, the OF-V4L2 implementation could be held together in one place, instead of being reimplemented all over the place, so I guess what I’m driving at is talk to him and see where your needs intersect, and unite the code base, or whatever.

Ok that sounds like a good idea. I will contact him and see what is possible. I agree it would be nice and convenient to collect the code regarding V4L2.

In case anyone is looking into this: I’ve put an updated version of the class on github.

Instances of the class can now be used much like an instance of the ofVideoGrabber class, using methods like initGrabber, grabFrame, isNewFrame and getPixels. A call to initGrabber needs a device address (like ‘/dev/video0’) and an io-method (there are three, see the defines in the header). I also implemented a setDesiredFrameRate method (which works).
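A rough usage sketch, going by the method names above; the argument lists and the io-method define name here are illustrative, the real signatures are in ofxV4L2.h on github:

```cpp
// in testApp.h:
ofxV4L2 grabber;
ofTexture tex;

// in testApp.cpp:
void testApp::setup(){
    grabber.setDesiredFrameRate(30);
    grabber.initGrabber("/dev/video0", OFXV4L2_MMAP);   // define name made up here
    tex.allocate(640, 480, GL_LUMINANCE);               // greyscale for now
}

void testApp::update(){
    grabber.grabFrame();
    if(grabber.isNewFrame()){
        tex.loadData(grabber.getPixels(), 640, 480, GL_LUMINANCE);
    }
}

void testApp::draw(){
    tex.draw(0, 0);
}
```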

Note that it still features only greyscale video. I will update the source when color is implemented.

Link to github: https://github.com/mennowo/ofxV4L2.

Great to have this! I have a potential project in mind where I can use this. One thing though: it would be useful to make your addon’s structure align with the folder structure recommended in http://ofxaddons.com/howto. That way it would be easier to, e.g., add examples.

Ok, I revised the file structure so the repository has a src and an example directory. Also added an example. Hope this works elsewhere like it does here!

A side note: I am unsure whether the grabFrame() method is implemented the best way. I am not sure how often OF probes the class for a frame, and what happens if nothing is there. It works fine, so I think the way I have implemented the isNewFrame() method is ok, but it would be good if someone could take a look.

isNewFrame should return true every time there’s a new frame until you call update again; so if you call isNewFrame twice, it should return true both times, and the next update should set it to false if there are no new frames.

The way it is implemented now, it does work this way, provided grabFrame is called first: the grabFrame method resets the newframe boolean and then sets it to true only when a new frame is loaded; isNewFrame then returns that boolean.
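Boiled down to a minimal standalone sketch of that pattern (tryReadFrame() just stands in for the actual V4L2 dequeue and copy):

```cpp
struct Grabber {
    bool newframe;
    Grabber() : newframe(false) {}

    bool tryReadFrame(){ /* dequeue a buffer, copy pixels */ return false; }

    void grabFrame(){
        newframe = false;            // reset on every call
        if(tryReadFrame()){
            newframe = true;         // only set when a frame really arrived
        }
    }

    bool isNewFrame() const { return newframe; }
};
```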

Thanks mennowo great work.

Just a note for anyone watching this thread - I’ve got two ps3 eye cameras running OK on Kubuntu 12.04 using the ofVideoGrabber class shipped with oF 0071, by plugging the cameras into separate internal USB hubs, so it certainly seems like a bandwidth issue.