Reducing Video Lag

I have two live video streams heading off to an Oculus Rift, but the way I’m doing it is generating some pretty serious lag (about half a second), and too slow a framerate (about 25fps right now, I need as close to 60 as possible). I’m on OSX 10.10, running oF 0.8.4, using two Firewire cameras for the time being (will be USB 3.0 in the future).

Here’s my video pipeline per eye:

In setup():


    videoCamWidth = 1280;
    videoCamHeight = 1024;
    riftEyeWidth = 960;
    riftEyeHeight = 1080;

    leftEyeVideo.initGrabber(videoCamWidth, videoCamHeight);
    leftEyeImage.allocate(riftEyeWidth, riftEyeHeight, OF_IMAGE_COLOR);

In update():

    bool bNewFrameLeft = leftEyeVideo.isFrameNew();

    if (bNewFrameLeft) {
        leftEyePixels = leftEyeVideo.getPixelsRef();

        // cropping to adjust aspect ratio; avoid this if at all possible, it's killing the framerate.
        // note: the ratios must be compared in floating point; with ints, 960 / 1080 truncates to 0.
        // Input: (Wi, Hi); Output: (Wo, Ho).
        if ((float) riftEyeWidth / riftEyeHeight > (float) videoCamWidth / videoCamHeight) {
            // output is wider than input: keep the full width, crop height to Wi * Ho / Wo
            leftEyePixels.crop(0, 0, videoCamWidth, videoCamWidth * riftEyeHeight / riftEyeWidth);
        } else {
            // output is taller than input: keep the full height, crop width to Hi * Wo / Ho
            leftEyePixels.crop(0, 0, videoCamHeight * riftEyeWidth / riftEyeHeight, videoCamHeight);
        }

        leftEyePixels.resize(riftEyeWidth, riftEyeHeight);
        leftEyeImage.setFromPixels(leftEyePixels); // load into the ofImage drawn in draw()
    }


In draw():

    leftEyeImage.draw(0, 0);

So essentially: grab video, put in ofPixels object, crop to fix aspect ratio, rotate, scale, put in ofImage object, draw.

What would be the best way to optimize this pipeline, keeping in mind I may need to use a shader on the videos as well?

I would guess the crop, rotate90, and resize are heavy operations to be running on a per-frame (or per-bNewFrameLeft) basis; they reallocate memory, shift pixels around, etc. I'd look at FBOs (basically a system for "drawing" into a texture) and use OpenGL operations to rotate and resize. For cropping, you can use the extended drawing routines on pixel-based objects (ofImage, ofVideoGrabber): call drawSubsection(), which, in addition to the output position or rectangle you'd pass to draw(), also lets you specify the input rectangle.

You might have good luck using a profiler. On OSX, in Xcode go to Product -> Profile and pick "Time Profiler". After letting it run for a little while, hit stop on the profiler. I usually check "Invert Call Tree", "Hide Missing Symbols", and "Hide System Libraries"; then you can figure out which functions take up the largest percentage of your running time and which are heaviest. It's helpful to know where to focus your optimization.

Also, check that your compile target is Release and not Debug; you can sometimes see a modest speed improvement moving to Release.


If you're up to it, threading those functions might help, though if I'm not mistaken OpenGL doesn't work off the main thread. The ofThread example in the documentation uses a video grabber, so at least you have something to work with.

I'll admit nothing has made me want to give up more than debugging multithreaded programs (well, maybe heap corruptions), but for anything that's intensive and slowing down your main thread, threading can be your best friend. I think you can use OpenCV to do all the adjustments to the image without OpenGL, which could be your workaround.


I think Zach's approach would be the easiest: using OpenGL instead of pixel operations would really speed up the process.

Allocate two FBOs in your setup (at the correct Oculus Rift output resolution per eye), and set the anchor point on your input video streams to 50% (so they rotate around their center later on).

then in update() do something like

    fboLeft.begin();
    ofTranslate(fboLeft.getWidth()/2, fboLeft.getHeight()/2);
    leftEyeVideo.draw(0, 0, riftEyeWidth, riftEyeHeight); // does the scaling & AR
    fboLeft.end();
    // same for the right eye

In draw(), draw your fboLeft & fboRight to the Oculus Rift (as you would the ofImage in your code).

Good luck!



You might get an added speed bump by using Hap codec, which decompresses your video frames on the GPU.

The codec installer is available here:


@Zach — You're exactly right that crop, resize, and rotate are heavy; ofTexture::loadData is too. That profiler tool is awesome, thank you. drawSubsection() isn't a member of the ofVideoGrabber class, though; how might I access it without having to loadData into a new ofImage?

@Kj1 — Thanks for your instructions; I’m now running at 60fps. I was doing something similar before, using ofPushMatrix and ofPopMatrix. I assume this Fbo method is faster, though?

One final question:

Right now, I'm using cameras with 648x488 resolution. Those run fantastically, albeit pixellated: 60fps and no lag. When I init my videoGrabbers with higher resolutions, in preparation for better cameras, it introduces some serious lag. As a test, when I switch to my monitor's onboard webcam and use its native resolution as the init resolution (which is close to the new cameras'), that lag is eliminated. Do you think the lag is coming from grabbing at non-native resolutions, or will I be in trouble when the new cameras arrive?

That looks interesting; unfortunately I can’t guarantee that this program will be run from the same computer every time, and I’d rather avoid having to install new codecs on every machine.

To use drawSubsection(), you could call videoGrabber.getTextureReference().drawSubsection(…); it appears not to be implemented in all classes.

Your second question is a bit harder: it depends a lot on the system, camera specs, etc. I'm guessing you'll have to experiment; feel free to report your findings back! :)

Out of curiosity, what camera are you using to pull in 60fps?

Also I don’t know if this is feasible or will help (or why this is true), but I’ve noticed that my Oculus runs way better / less lag on Windows than on OSX. I’m running parallel operating systems on the same hardware (a Mac desktop computer).

Right now, I’m using some Firewire Point Grey Flea3s, which work great at their native resolution. We’ve ordered some USB3.0 uEye cameras for the next iteration.

I might end up having to switch over to Windows at some point, but I’ve been holding off as long as possible.