It has been soooo long since I have posted here. Being back in the OF world feels great!
I want to grab photos from at least 15 cameras at once. I’m trying to achieve this without breaking the bank, so I’m not looking at DSLRs, etc. So far I have been getting OK results using 15 Logitech c920x cameras plugged into the same Windows computer (using PCIe USB expansion cards), but I’m not able to get 30fps streams from these cameras (from what I understand, YUY2 over USB 2.0 is the limitation).
Getting a 30fps stream would give a better UX (the 15 streams are displayed as feedback to the user), but would hopefully also mean fewer bad captures (blurry ones where the subject moved during the capture).
Would anyone recommend:
a specific addon/way to reach 30 fps?
any other cameras that would be better for that use case?
Am I correct to think that in order to get the best stills possible I should keep using YUY2 rather than MJPEG or H264? Any other codec I should be looking at?
I personally dislike this specific camera model. I prefer the cheap C270 as they are a lot more responsive.
Take a look at illumination too: in low light the camera will use longer exposures, which lowers the frame rate.
I’m doing experiments with 4 super oldschool PS3 Eye cameras, and I just love them: ultra responsive and very low resolution, with lots of manual control so you can get consistent results across different cameras.
Actually, for my performance I ended up borrowing the Logitech BRIO from work. It didn’t run at a higher fps or resolution, but it still looked noticeably better than the c920… it’s a bit of a price jump though.
The issue I’m facing now is that I’m not able to run as many cameras. CPU usage is much higher since switching to MJPG: it reaches 100% with 9 or 10 cameras (on an i7 6700k), whereas with YUY2 it maxed out at 60% with 15 cameras.
I’m having issues getting the lib into the OF nightly builds right now, but if you download the master videoInput branch and build the compileAsLib project for x64, you should be able to swap out the videoInput.h and videoInput lib in OF for the ones you compile and get the H264 feature.
Then you would do: VI.setRequestedMediaSubType(VI_MEDIASUBTYPE_H264);
I’m still going to work on a way to make this easier to do from OF, but I just wanted to give you a chance to try it out. You might get better performance / quality than with MJPG.
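For reference, a minimal sketch of what using the compileAsLib build directly might look like. Treat this as an untested assumption on my part: `videoInput`, `setupDevice`, `getSize`, `isFrameNew`, `getPixels`, and `stopDevice` are the stock videoInput API, and `setRequestedMediaSubType` is the new call from the branch above.

```cpp
// Hypothetical sketch only - assumes the compileAsLib videoInput build.
#include "videoInput.h"
#include <vector>

int main() {
    videoInput VI;
    // Ask the DirectShow graph for H264 frames (new call from the branch above).
    VI.setRequestedMediaSubType(VI_MEDIASUBTYPE_H264);
    VI.setupDevice(0, 1920, 1080);               // device 0 at 1080p

    std::vector<unsigned char> buf(VI.getSize(0)); // RGB output buffer
    if (VI.isFrameNew(0)) {
        VI.getPixels(0, buf.data(), true, true);   // flip R/B + flip image
    }
    VI.stopDevice(0);
    return 0;
}
```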
I wonder why the fps is still slower when it’s not using much of the CPU? Maybe because the decoding is handed off to the GPU? So the GPU is maxed out with the decoding, but the CPU is fairly un-impacted as a result?
Looking online I see some threads mentioning a “RightLight” setting that needs to be disabled in the webcam settings. I would also check things like auto-exposure etc. to make sure they’re not enabled. Often a long shutter speed can reduce the FPS. But I’m guessing if you have got it working with MJPG you’ve done all of these things.
One more thing to try is disabling USB power management in the BIOS and/or the Windows Power Management settings.
But most likely the bottleneck is in DirectShow.
Edit: one more thing, definitely build in Release if you haven’t been.
No, using texture does not help.
Even if I set only one of the cameras to use a texture (and render it on screen), each time I add/open another camera (with setUseTexture(false)) the CPU usage increases by ~10%.
Ah okay - I guess that all makes sense considering it’s an HD stream.
videoInput only allows RGB or BGR output pixels, so the DirectShow graph is converting the pixels to that format. That is probably where some of the CPU overhead is.
You might have luck using GStreamer or another capture pipeline that can load the texture with fewer decoding steps, but at this point I would maybe look at other camera options.
You could try building OpenCV (fairly easy via CMake) with video capture support and see if their pipeline is faster. I believe they have a Media Foundation pipeline, which is a bit more modern than videoInput.
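If you want to try that quickly, here’s a minimal sketch of opening one camera through OpenCV’s Media Foundation backend. Untested on my end, so treat the device index, resolution, and FOURCC as placeholders to adjust:

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    // CAP_MSMF selects the Media Foundation backend on Windows.
    cv::VideoCapture cap(0, cv::CAP_MSMF);
    if (!cap.isOpened()) {
        std::printf("camera 0 failed to open\n");
        return 1;
    }

    // Request a compressed mode so the stream fits through USB 2.0.
    cap.set(cv::CAP_PROP_FOURCC, cv::VideoWriter::fourcc('M', 'J', 'P', 'G'));
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 720);
    cap.set(cv::CAP_PROP_FPS, 30);

    cv::Mat frame;
    for (int i = 0; i < 120; ++i) {      // grab ~4 seconds worth of frames
        if (!cap.read(frame)) break;
    }
    std::printf("fps reported by backend: %.1f\n", cap.get(cv::CAP_PROP_FPS));
    return 0;
}
```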
I do think 10-15x HD might be a strain for most pipelines TBH.
One thing to check is light: if the camera is on auto and not getting enough light, it will hold the shutter open longer, which looks like a lower frame rate (and likely also outputs a lower frame rate). It would be worth running the test in a lot of light (like full sunlight if it is available in your location at the moment). Small-sensor cameras with smaller pixels need more light, and in auto mode this will result in a lower frame rate (I am not sure if the camera will output 30fps when it is capturing less due to low light). The RightLight setting that @theo mentioned is controllable via the Logitech Capture app, as are all the other camera controls.