Good USB camera (webcam) to reach 30 fps at a minimum of 1080p


It has been soooo long since I have posted here. Being back in the OF world feels great!

I want to grab photos from at least 15 cameras at once. I’m trying to achieve this without breaking the bank, so I’m not looking at DSLRs, etc. So far I have been getting OK results using 15 Logitech C920x cameras plugged into the same Windows computer (using PCIe USB expansion cards), but I’m not able to get 30fps streams from these cameras (from what I understand, because of YUY2 over USB 2.0).

Getting a 30fps stream would allow me to have a better UX (as the 15 streams are displayed as feedback to the user), but also, I hope, fewer bad captures (blurry ones where the subject moved during the capture).

Would anyone recommend:

  • a specific addon/way to reach 30 fps?
  • any other cameras that would be better for that use case?

Am I correct in thinking that to get the best stills possible I should keep using YUY2 rather than MJPEG or H264? Any other codec I should be looking at?

Thank you!


Hey @smallfly!

Welcome back :slight_smile:
Someone else ran into an issue with that camera not doing 30fps.

I think they got it to work with this suggestion:

If that works for you, I should definitely look at finding a way to make it settable, at least with the ofDirectShowPlayer, or maybe raise the priority of that format so it is tried earlier in the iteration.

Hope that helps!


I personally dislike this specific camera model. I prefer the cheap C270, as they are a lot more responsive.
Take a look at your illumination too, as low light can force longer exposures and a lower framerate.
I’m doing experiments with 4 super old-school PS3 Eye cameras, and I just love them: ultra responsive and very low resolution, with lots of manual control so you can get steady results from different cameras.

Actually, for my performance I ended up borrowing the Logitech BRIO from work. It didn’t run at a higher fps or resolution, but still looked noticeably better than the C920… it’s a bit of a price jump though.

Hello @theo !

Thanks for the quick reply and sorry for taking so long to test this.

Yes, it works!
Now, I need to see if moving to MJPG is an option for my project…

@s_e_p I have tried the BRIO and even though the image quality looks better, I did not get good results; I think the stream is more compressed (and most likely uses MJPG).


Thanks for the confirmation.
Going to try and figure out an easier way to pass this option through.

It would be interesting to do some side-by-side shots with YUY2 and MJPG and see if there is a noticeable difference.

Also wondering if I can just add a FOURCC for H264 to videoInput?
Looks like it exists: c++ - How to control bitrate MEDIASUBTYPE_H264 directshow? - Stack Overflow

Will try adding it to videoInput and see if we can pull this into the nightly builds soon.

The issue I’m facing now is that I’m not able to run as many cameras. The CPU usage is much higher since switching to MJPG: it reaches 100% with 9 or 10 cameras (on an i7 6700K). With YUY2 it was at most 60% with 15 cameras.

Not sure how I should be addressing that yet.

@smallfly @s_e_p I made the changes in videoInput to add H264.

Having issues getting the lib into the OF nightly builds right now, but if you download the master videoInput branch and build the compileAsLib project for x64, you should be able to swap out the videoInput.h and videoInput lib in OF for the ones you compile and get the H264 feature.

Then you would do:

I’m still going to work on a way to make this easier to do from OF, but I just wanted to give you a chance to try it out. You might get better performance/quality than MJPG.

Good timing. Definitely give the H264 changes a go, as they might end up being hardware decoded faster than MJPG.

One other suggestion would be to call ofVideoGrabber::setUseTexture(false); if you only need the pixels. This would save the texture upload of all those HD frames, which could help a lot.

However if you need the textures that doesn’t help :slight_smile:

@theo Thanks!

I just tried with H264. It definitely uses much less CPU (~20% for the 15 cameras), but the cameras are not running at 30fps. I get similar fps to when using YUY2.

@smallfly doh! :man_facepalming:

I wonder why the fps is still slower when it’s not using much of the CPU? Maybe because the decoding is offloaded to the GPU? So the GPU is maxed out with the decoding, while the CPU is fairly unaffected as a result?

Looking online I see some threads mentioning that a “RightLight” setting needs to be disabled in the webcam settings. I would also check things like auto-exposure to make sure they’re not enabled. Often a long shutter speed can reduce the FPS. But I am guessing that if you got it working with MJPG you’ve done all of these things.

One more thing to try is disabling USB power management in the BIOS and/or the Windows power management settings.

But most likely the bottleneck is in DirectShow.

Edit: one more thing, definitely build in Release if you haven’t been.

Well I’m surprised too. I will keep looking.

The GPU is not maxed out. Even with only 1 camera open I’m not getting a 30fps video stream using H264 (built in Release).

@theo could it be possible that I’m not actually using H264?

@theo to add to my previous message…


one more link…

Oh wow - well that is a shame.
I’m glad at least this and @s_e_p’s issue spurred me to figure out some of the performance issues with videoInput.

If you find a different camera that works well, it would be great if you could follow up here.
I’ve been using the Kinect Azure color camera but that is definitely overkill :slight_smile:


Btw - maybe try MJPG again with setUseTexture(false); curious to see if that helps with the CPU load.

No, disabling the texture does not help.
Even if I set only 1 of the cameras to use a texture (and render it on screen), each time I add/open another camera (with setUseTexture(false)) the CPU usage increases by 10%.

Ah okay - I guess that all makes sense considering it is an HD stream.

videoInput only allows RGB or BGR output pixels, so the DirectShow graph is converting the pixels to that format. That is probably where some of the CPU overhead comes from.

You might have luck using gstreamer or another capture pipeline that can load the texture with less decoding steps, but at this point I would maybe look at other camera options.

You could try building OpenCV (fairly easy via CMake) with video capture support and see if their pipeline is faster. I believe they have a Media Foundation pipeline, which is a bit more modern than videoInput.

I do think 10-15x HD might be a strain for most pipelines TBH.

Could YUY2 over USB 3.0 be an option? The C920 is USB 2.0 only.

I would like to find a cheap ($200 max) photo camera option - e.g. a Canon I could trigger from OF.

One thing to check is light. If the camera is on auto and not getting enough light, it will hold the shutter open longer, which will look like a lower frame rate (and likely also output a lower frame rate). It would be worth running the test in a lot of light (like full sunlight, if it is available in your location at the moment).

Small-sensor cameras with smaller pixels need more light, and in auto mode this will result in a lower frame rate (I am not sure whether the camera will still output 30fps when it is capturing less due to low light). The RightLight setting that @theo mentioned is controllable from the Logitech Capture app, as are all the other camera controls.