high framerate capture to disk

I’m looking for tips on taking the 640x480 60 fps output of the PS3Eye and saving it to disk without dropping any frames from the camera.

I started by simply using ofImage::saveImage() to save each frame from the camera. This dropped about every third image.

Then I tried threading the image saving so it wasn’t happening in the camera thread (which is separate from the main update()/draw() thread in all these cases). With the saving in its own thread, I lost every fourth image.

Then I tried something a little crazier where I used FreeImage_SaveToMemory(), which does the compression (in this case JPEG) but writes the result to memory instead of disk. Once everything is captured, it’s copied from memory to disk. Now I’m losing maybe every tenth image.

I might be able to overcome this specific case by threading the SaveToMemory code, but I feel like I must be missing something. FRAPS does full screen video at 60 fps, so there must be some kind of trick :slight_smile: I don’t think JPEG compression is the right way to go here – compressing a 640x480 frame takes anywhere from 10-15 ms on my computer, just barely below the ~16.7 ms per frame required for 60 fps.

Should I be storing uncompressed video in memory (that’s 52 MB/sec), and then intermittently saving it to disk?
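For concreteness, here’s the arithmetic behind that 52 MB/sec figure (assuming 24-bit RGB, i.e. 3 bytes per pixel), as a tiny C++ helper:

```cpp
#include <cstdint>

// Bytes per second of raw video: width * height * bytes-per-pixel * fps.
std::uint64_t rawBytesPerSecond(int w, int h, int bytesPerPixel, int fps) {
    return std::uint64_t(w) * h * bytesPerPixel * fps;
}
// rawBytesPerSecond(640, 480, 3, 60) == 55296000 bytes
//   which is about 52.7 MB/sec, or about 3.1 GB per minute
```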

Any ideas would be hugely appreciated!

PS: I discovered that the PS3Eye is actually really good about not dropping any frames (at least on Windows) – it’s more that programs like AMCap aren’t very efficient about saving video.

PPS: I tried Zach’s ofxQtVideoSaver and timed addFrame() – it’s a bit faster and drops fewer frames than JPEG compression, but still isn’t ideal.

Hmmm, if you can try it on linux, there’s glc:
http://nullkey.ath.cx/projects/glc

The latest version seems really quite fast, and it also has an API you can build into your app if you really want. You might also get some tips on how they capture so quickly by looking at the code (they use an OpenGL extension to grab the frame).

Interesting… thanks for that link, Pierre. It looks like the trick I’m looking for is somewhere in here: http://nullkey.ath.cx/git/glc/tree/src/glc/capture/gl-capture.c But I think I need a higher-level overview of what’s going on in order to put it to use.

Also, I don’t think the frame capture is something I can really get around as the camera input interface is system dependent. I could create an optimized capture app for one OS, I suppose, but I don’t think the input interface is the bottleneck anyway…

It looks like at least part of the trick is using one of these libraries:

http://www.oberhumer.com/opensource/lzo/
http://www.quicklz.com/

To do basic compression while the app is capturing. They only do 50-70% compression, so I still feel like I’m missing something. Without compression you’ll get a 3.1 GB video for 1 minute of input… so even compressed we’re still talking about >1 GB per minute…

glc uses an OpenGL extension that allows you to read from graphics card memory at the same time you’re writing to it, so while you’re drawing the next frame you can be reading the previous one. But for your case I don’t think it’s necessary, since you already have the frames in the computer’s memory.

Perhaps you can try saving video instead of images; each time you create an image you have the overhead of the image header plus opening/closing each file.

With the new gstreamer utils in Linux it should be super simple, something like:

    ofGstUtils gst;
    gst.setPipelineWithSink("v4l2src ! ffmp4enc ! filesink location=filename.mp4");

Then if you want to get the buffer at the same time, you’ll need an element called tee in the pipeline to pass the buffer to two sinks at once. I don’t remember the exact syntax right now, but I can take a look if you’re interested.
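For reference, the tee syntax looks roughly like this (written from memory, so the element names may need adjusting for your gstreamer version): tee gets a name, and each branch refers back to it through a queue.

```shell
# Hypothetical sketch: one branch to the display, one branch to disk.
# "tee name=t" names the element; "t." pulls from it for each branch.
gst-launch v4l2src ! tee name=t \
    t. ! queue ! ffmpegcolorspace ! ximagesink \
    t. ! queue ! ffmp4enc ! filesink location=filename.mp4
```

The queue elements matter: they decouple the branches so a slow disk write doesn’t stall the display branch.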

Right now the gstUtils only work on Linux, but it should be as simple as compiling or finding a binary of gstreamer for Windows.

I haven’t been able to get gstreamer working on Windows yet.

But your post gave me an idea: the best encoder in this case is probably going to be h.264, so why not just use an h.264 implementation and nothing else?

Is there a reason that Zach’s ofxQtVideoSaver can’t do h.264? I feel like I’ve seen Quicktime export h.264 from applications like Premiere.

I just took a stab at compiling ffmpeg and failed. But I got a .lib out of x264. When I try to link against the .lib and compile against the .h, I get:

  
    obj\release\src\testApp.o:testApp.cpp:(.text+0x29dd)||undefined reference to `x264_encoder_open(x264_param_t*)'|

For now I switched to preallocating raw uncompressed images. It takes a lot of space, but I can capture a legitimate 60 fps this way :wink:

Not so certain h.264 would be a good idea: it’s extremely processor-heavy, so your bottleneck will become CPU time.

As to the linking error, it sounds like you’re missing a .lib or a .cpp file somewhere. Google the name of the function ‘x264_encoder_open’ and see what comes up. Also handy if you’re on Linux or OSX is the ‘nm’ command line tool, which lists the function names inside libraries.
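One other cause worth checking, given that the undefined symbol is shown with a C++-style signature (`x264_encoder_open(x264_param_t*)`): x264 is a plain C library, and if its header is included from C++ without C linkage, the compiler mangles the declarations and the linker then looks for a C++ symbol the .lib doesn’t contain. Wrapping the include usually fixes that:

```cpp
#include <stdint.h>   // x264.h relies on C99 integer types

extern "C" {
#include <x264.h>     // force C linkage so symbols match the C library
}
```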

[quote author=“grimus”]Hmmm, if you can try it on linux, there’s glc:
http://nullkey.ath.cx/projects/glc
[/quote]

Brilliant tip. It works like a charm.

Thanks,
Peter

Hey Kyle,

Is there any reason you can’t just use VLC or AmCap to save the video? I was also going to write a simple webcam video saver, but then I realized I could just use an existing program: I was able to capture 320x240 @ 125 fps with no problems using guvcview. I’m not sure if there is a Windows version, but I would give AmCap a try.

If there is a reason you need to use oF, then you could try using a threaded image saver. I’ve got a class that stores frames in a linked list and writes them out as fast as it can in a separate thread; then you can use ffmpeg to construct a video. It might get bogged down and freeze your computer if it can’t go fast enough, though.
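To make the pattern concrete, here’s a minimal self-contained sketch of that kind of queued saver using std::thread (the class and method names here are made up for illustration; the real class is in the attachment below):

```cpp
#include <atomic>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Sketch: the capture thread pushes a copy of each frame and returns
// immediately; a separate writer thread drains the queue to disk.
class FrameQueueSaver {
public:
    explicit FrameQueueSaver(const std::string& prefix)
        : prefix_(prefix), done_(false), framesWritten_(0) {
        writer_ = std::thread(&FrameQueueSaver::writeLoop, this);
    }

    ~FrameQueueSaver() {
        { std::lock_guard<std::mutex> lock(mutex_); done_ = true; }
        cond_.notify_one();
        writer_.join();  // drains any frames still queued before returning
    }

    // Called from the capture thread; the only cost is one copy of the pixels.
    void addFrame(const std::vector<unsigned char>& pixels) {
        { std::lock_guard<std::mutex> lock(mutex_); queue_.push(pixels); }
        cond_.notify_one();
    }

    int framesWritten() const { return framesWritten_.load(); }

private:
    void writeLoop() {
        int index = 0;
        std::unique_lock<std::mutex> lock(mutex_);
        while (true) {
            cond_.wait(lock, [this] { return done_ || !queue_.empty(); });
            if (queue_.empty()) break;  // done_ is set and nothing is left
            std::vector<unsigned char> frame = std::move(queue_.front());
            queue_.pop();
            lock.unlock();  // the slow disk write happens without the lock held
            char name[256];
            std::snprintf(name, sizeof(name), "%s%05d.raw", prefix_.c_str(), index++);
            if (std::FILE* f = std::fopen(name, "wb")) {
                std::fwrite(frame.data(), 1, frame.size(), f);
                std::fclose(f);
                ++framesWritten_;
            }
            lock.lock();
        }
    }

    std::string prefix_;
    bool done_;
    std::atomic<int> framesWritten_;
    std::queue<std::vector<unsigned char>> queue_;
    std::mutex mutex_;
    std::condition_variable cond_;
    std::thread writer_;
};
```

Since addFrame() just copies and returns, the capture thread is never blocked by disk I/O; the queue only grows when the disk can’t keep up.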

Hi Tim, AmCap would often drop frames when I tried this with the PS3Eye, which completely destroyed the application (3d scanning, where every frame is essential).

Using OF wasn’t absolutely necessary, but it was important to me to automate the capture. I could have made some call to the command line to capture a video I suppose.

The main reason I was looking at this problem (high FPS capture to disk) is because I wanted to be able to 3d scan an arbitrary amount of time. This means having a good pipeline that initially captures to RAM then offloads to disk regularly. I wasn’t sure how to interface to something like VLC or AmCap to make that happen.

Also, keep in mind I was writing that post based on my experience with an older ThinkPad. It was nice, but using a slightly newer computer might actually clear up the problem (as it seems to in your case).

Kyle,

I’m not sure how your 3d scanning works (though it’s awesome), but it seems like you would want a decent way of dealing with dropped frames. Even if the recording software is fast enough, you may have problems with the driver dropping frames before the software even sees it.

Here’s my threaded video saver, it’s pretty messy and has support for multiple video channels at once which you probably don’t need.
[attachment=0:1kzg23bv]threadedVidSaver.h.zip[/attachment:1kzg23bv]

I just tested it at 640x480@60, and on my older ThinkPad T60 the number of frames in the queue shoots up pretty fast. I’m guessing that the reason for this is that it has to open and close each file, whereas with an actual video format you just keep appending to the same file. I don’t know enough about video formats to do anything like that though.


Oh, I just thought of something: maybe the best thing to do would be to combine the QtVideoSaver with a frame queue. This would keep the file I/O down a bit and should ensure no frames are lost. I don’t have oF installed on Windows or Mac so I can’t try this now, but it sounds like it would work.

The 3d scanning is based on retrieving every frame from the camera, so it’s essential to not drop frames (otherwise you get a 3d glitch). I’ve tested the timestamps of when frames are returned using my own threaded video grabber:

http://code.google.com/p/structured-light/source/browse/trunk/OpenFrameworks/apps/structuredLight/capture/src/ofxThreadedVideoGrabber.h

And I’m sure the driver doesn’t drop frames. There is only an issue if I don’t poll the camera regularly enough due to backup from other processes (like file writes, as you saw).

Using a single video stream is a good way to go. Another good way is just allocating a lot of RAM in advance. The best solution would be something that balances RAM and HDD allocation, swapping things out of RAM onto the HDD in large chunks now and then. I’m pretty sure this is what apps like ScreenFlow, FRAPS, etc. do… besides having heavily optimized encoding algorithms :slight_smile:
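A rough sketch of that RAM/HDD balancing idea (illustrative only, not taken from any of those apps): preallocate a fixed block of frame memory, and every time it fills, append the whole block to one open file in a single large write instead of one small write per frame.

```cpp
#include <algorithm>
#include <cstdio>
#include <vector>

// Sketch: frames accumulate in a preallocated buffer, and each full
// chunk is flushed to one file in a single large fwrite().
class ChunkedRecorder {
public:
    ChunkedRecorder(const char* path, std::size_t frameBytes, std::size_t framesPerChunk)
        : frameBytes_(frameBytes),
          framesPerChunk_(framesPerChunk),
          used_(0),
          buffer_(frameBytes * framesPerChunk),  // all RAM allocated up front
          out_(std::fopen(path, "wb")) {}

    ~ChunkedRecorder() {
        flush();  // write out any partial chunk
        if (out_) std::fclose(out_);
    }

    // Copy one frame into the preallocated buffer; write when the chunk fills.
    void addFrame(const unsigned char* pixels) {
        std::copy(pixels, pixels + frameBytes_,
                  buffer_.begin() + used_ * frameBytes_);
        if (++used_ == framesPerChunk_) flush();
    }

    void flush() {
        if (out_ && used_ > 0) {
            std::fwrite(buffer_.data(), frameBytes_, used_, out_);
            used_ = 0;
        }
    }

private:
    std::size_t frameBytes_, framesPerChunk_, used_;
    std::vector<unsigned char> buffer_;
    std::FILE* out_;
};
```

In a real capture app the flush would run on a second thread (or alternate between two buffers) so the capture loop never blocks on the write; this sketch keeps it synchronous for brevity.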

Hey Kyle,

I don’t know if you’ve figured this out already or not, but if not, I managed to get a 60 fps capture working by combining the RAM frame queue with a single video file. I basically modified my threaded video saver to open one file to dump JPEG data into. I’m using the FreeImage_SaveToMemory() function to compress each frame before writing to disk. As long as the video capture thread can keep up with incoming frames, which should be no problem, the saving thread will never miss a frame thanks to the queue. I had the program running at and above 60 fps while the PS3 cam was set to record at 640x480 @ 60 fps. My RAM-based frame queue was staying small, usually between 1-20 frames in memory at a time, depending on whatever else the system was doing at the moment.

The trick is actually not worrying about properly encoding video at all until the program closes. As I said, I’m basically just pushing raw JPEG frames into one file, one after another. I then call ffmpeg to re-encode the video file. This takes almost no time because it just copies the video stream into an MJPEG MOV file with a proper header.
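For anyone trying to reproduce this, the final step can be a stream copy along these lines (the exact flags are a guess at this workflow, so adjust file names and frame rate to taste):

```shell
# Remux, not re-encode: read the file of concatenated JPEG frames as an
# MJPEG stream and copy it unchanged into a MOV container with a proper
# header. -r tells ffmpeg the rate the camera was capturing at.
ffmpeg -f mjpeg -r 60 -i frames.mjpg -vcodec copy output.mov
```

Because the video stream is copied rather than decoded and re-encoded, this step runs at disk speed rather than encoder speed.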