Best way to create a video buffer


For a small project I am doing, I want to save portions of video into one or more buffers and later replay them. I'm not sure how to proceed: is the ofFbo class the right one? How would I indicate the number of frames I want to store, and how would I cycle through the stored frames to play them back? Or is it equally efficient to just store frames in many arrays of unsigned chars?
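For what it's worth, the unsigned-char route works fine for short buffers. Below is a minimal sketch of a fixed-capacity frame ring buffer in plain C++ (no openFrameworks types; the class name, frame sizes, and counts are all illustrative, not from any addon):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Fixed-capacity ring buffer of raw frames (e.g. grayscale pixels).
// Oldest frames are overwritten once the buffer is full.
class FrameRingBuffer {
public:
    FrameRingBuffer(std::size_t frameBytes, std::size_t capacity)
        : frameBytes(frameBytes),
          frames(capacity, std::vector<unsigned char>(frameBytes)),
          writePos(0), stored(0) {}

    // Copy one frame into the buffer, overwriting the oldest when full.
    void push(const unsigned char* pixels) {
        std::copy(pixels, pixels + frameBytes, frames[writePos].begin());
        writePos = (writePos + 1) % frames.size();
        if (stored < frames.size()) ++stored;
    }

    // Frame recorded `age` frames ago (0 = most recent) -- this is the
    // lookup a fixed video delay needs.
    const unsigned char* frameAgo(std::size_t age) const {
        std::size_t idx = (writePos + frames.size() - 1 - age) % frames.size();
        return frames[idx].data();
    }

    std::size_t size() const { return stored; }

private:
    std::size_t frameBytes;                          // bytes per frame
    std::vector<std::vector<unsigned char>> frames;  // preallocated slots
    std::size_t writePos, stored;
};
```

Since capacity is fixed up front, RAM use is simply frames × bytes-per-frame, which is why grayscale and short delays keep this approach manageable.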

Recording Video and Looping It

I did this addon some time ago:

which allows working with video buffers over time, both in RAM and/or textures


Also ofxVideoUtils


Hey Arturo,

This post is now quite old, but for a new idea I am looking further into ofxPlayModes.

What I wish for: to record into a buffer, then, with a delay that is variable in real time, display an altered version of this video feed. As the movement in the recorded image increases, the speed of playback should also increase, up to a maximum, and only until the playback arrives at a certain point before "now", at which point playback should be delayed again. Also, when there is little or no movement, I want playback to slow down and eventually come to a stop (and recording too).
The buffer will continuously be added to at the end, and played frames can be removed.

Would such a flexible and variable approach to recording and displaying video be possible with the addon? Judging from the YouTube footage I am inclined to think it is, but I find it hard to tell from the examples how I could begin to work on this (also because some of them seem not to work on my system).
I also wonder if ofxPlayModes allows for a buffer that is continuously written to and read from in the way I am looking for.

Any ideas appreciated!

Thanks, Menno


I don't maintain this addon anymore; the most recent version should be this:

But yes, it should support something like that. Mainly, you can create video headers that go through a buffer and report the time and the last frame they've read, and you can change the speed of a header based on whatever you want. So you would need to create a buffer, plug a video source into it, and create a video header that goes through the buffer.
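The header idea described above can be sketched roughly like this: a playback head holds a fractional position and a speed and wraps around the buffer. This is only an illustration of the concept, not the actual ofxPlaymodes API (all names here are mine):

```cpp
#include <cmath>
#include <cstddef>

// A playback "header" that walks through a frame buffer at a variable
// speed. Illustrative names, not the real ofxPlaymodes classes.
class PlayHead {
public:
    explicit PlayHead(std::size_t bufferSize)
        : bufferSize(bufferSize), pos(0.0), speed(1.0) {}

    void setSpeed(double s) { speed = s; } // e.g. driven by motion amount

    // Advance by `speed` frames and return the frame index to display.
    std::size_t tick() {
        pos += speed;
        // wrap around the ring buffer in both directions
        pos = std::fmod(pos, double(bufferSize));
        if (pos < 0) pos += bufferSize;
        return std::size_t(pos);
    }

    std::size_t lastFrame() const { return std::size_t(pos); }

private:
    std::size_t bufferSize;
    double pos;   // fractional read position
    double speed; // frames advanced per tick; <1 slows, >1 speeds up
};
```

Keeping the position fractional is what makes smoothly variable speed (including slowing to a stop) easy: the integer frame index is only derived at display time.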


Thanks! I'll download the latest version and look into creating and using video headers.


Trying to figure out how to get going with ofxPlaymodes, I ran into some trouble. Maybe I am just missing something very obvious; anyway, here it goes.

I am using the version from GitHub as originally uploaded by arturo, because the latest one by eloimaduell has no examples and is a lot harder to build because it integrates some code from Pd-extended (at least I have not gotten it to build yet; I usually do my interfacing with Pd via OSC anyhow). Now I am trying to get a simple delay working, based on the delay example, but I do not get a delay, not even with the original example. Video displays fine, but it is exactly as if I were using the grabber class directly. Below is the relevant code. Any ideas on what I am missing? Or could something be wrong with my setup?

void testApp::setup(){
    grabber.setDeviceID(1); // ps3 eye cam
    buffer.setup(grabber, 400, true);
    // ...
}

void testApp::update(){
    // ...
}

void testApp::draw(){
    // ...
}

void testApp::keyPressed(int key){
    if(key==' '){
        // ...
    }
}

void testApp::mouseMoved(int x, int y ){
    float pct = 250 * float(x)/float(ofGetWidth());
    // ...
}


On another note, I am curious what happens internally with the buffers in ofxPlaymodes. Looking at the source, I cannot make that out so well because I am quite new to C++ templates, vectors, etc. (Maybe I should read up on those now that I want to work with buffers like this.)

Is there a benefit to using ofxPlaymodes over creating a large buffer of unsigned chars? I.e., the code below stores 3 minutes of video, and building upon this I could also create a class with methods that allow random access to any frame and reading video back from the buffer at variable speed. Is this inefficient, or is it essentially the same as, for example, ofxPlaymodes?

The code below is very basic and will only store black-and-white (grayscale) images. Using color would multiply memory usage, which would indeed pose problems, as would creating multiple buffers of this size. After the buffer fills up, this program uses 3236 MB of memory… I guess it would be more efficient to dynamically change the buffer size as needed (a small delay does not require a large buffer), and then maybe do some calculation to check that the program won't exceed available memory… Is that something (or one of the things) ofxPlaymodes does?
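The memory question is easy to sanity-check up front with a back-of-the-envelope helper (illustrative only). It reproduces the figures in this thread: the 180 s grayscale buffer below and one minute of 640x480 colour at 60 fps both come to about 3.3 GB of raw pixels (the measured 3236 MB includes some overhead on top of that):

```cpp
#include <cstddef>

// Raw memory needed for an uncompressed frame buffer, to check
// against available RAM before allocating.
std::size_t bufferBytes(std::size_t width, std::size_t height,
                        std::size_t channels, std::size_t fps,
                        std::size_t seconds) {
    return width * height * channels * fps * seconds;
}
```

Running this before allocation (or whenever the delay length changes) is one cheap way to implement the "don't exceed available memory" check mentioned above.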

Code in testApp.h:

typedef struct buf{
    unsigned char * buffer;
    char hasframe;
} vidbuf;

vidbuf * buffer;
int buffersize;
int bufferpos;
int readpos;

float delay;

Code in testApp.cpp:

void testApp::setup(){
    camWidth = 640;
    camHeight = 480;
    totalPixels = 640 * 480;
    threshold = 150;

    buffersize = 180 * 60; // seconds * fps
    bufferpos = 0;
    delay = 30; // delay in number of frames
    buffer = new vidbuf [buffersize];
    for(int i = 0; i < buffersize; ++i){
        buffer[i].buffer = new unsigned char[totalPixels];
        buffer[i].hasframe = false; // mark empty until a frame is recorded
    }

    tex.allocate(camWidth, camHeight, GL_LUMINANCE);

    cam.initGrabber(camWidth, camHeight);
}

void testApp::update(){
    cam.update();
    if(cam.isFrameNew()){
        // createHist() -> loads a black&white image into the buffer
        createHist(cam.getPixels(), buffer[bufferpos].buffer);
        buffer[bufferpos].hasframe = true;
        readpos = (bufferpos - int(delay)) < 0 ? bufferpos - int(delay) + buffersize : bufferpos - int(delay);
        if(buffer[readpos].hasframe)
            tex.loadData(buffer[readpos].buffer, camWidth, camHeight, GL_LUMINANCE);
        else // not enough frames recorded yet, show the live frame
            tex.loadData(buffer[bufferpos].buffer, camWidth, camHeight, GL_LUMINANCE);
        ++bufferpos;
        if(bufferpos >= buffersize) bufferpos = 0;
    }
}

void testApp::draw(){
    tex.draw(20, 20, camWidth, camHeight);
}


Hi @mennowo !

I can't remember exactly how my ofxPlaymodes code was; it was done under OF073 and I am not sure if that causes problems porting it to OF0.8… And it was not 100% finished, due to some limitations on the Pd side.

Anyway, you could take out the Pd relation. In our original project, Playmodes was an audiovisual sampler, so you were able to manipulate a source of video and audio in sync.

If you're just interested in video, I guess ofxPlaymodes from Arturo should work. Even though it's not an easy architecture, it's very powerful. If you're very interested I could try to put together a simple example using my ofxPlaymodes (without audio) with a delay…

Of course you can code your own buffer as a huge vector of unsigned chars, and it will work. But if you want to go deeper than a simple delay, then using ofxPlaymodes will help you a lot, as it's very structured and (once you understand how it works) very powerful. For example, you can dynamically change the size of the array.

Let me know if you're still interested in using ofxPlaymodes and I can try to put together a simple example with a delay…


Hi @eloi,

thanks for your reply. I am mainly interested in video, because if there is audio I will probably synthesize it using Pd, based on information I derive from simple video analysis (though experimenting with live sound could be fun).
Mainly I wish to explore having a variable delay on video, so one could move in front of the camera and after some time interact with/react to oneself.
Being able to have multiple buffers would also be nice, so I can redeploy video samples from earlier.
I think ultimately I would need to create some class using vectors that dynamically follows the needs of the program, and if that is what ofxPlaymodes can already do, maybe it doesn't make much sense to recreate it.
Also, maybe because it uses GL and things like FBOs, it is much more efficient than a blunt approach using RAM…

In terms of the video implementation, is there a big difference between your ofxPlaymodes version and the original one by arturo? I can compile most examples of arturo's version (multix gave errors), but as described I get no delay for some reason…
I tried compiling your version, using the delay example from arturo as a starting point (on Ubuntu Studio 13.10 x86_64). It seems there is a difference in the ofTexture implementation between OF073 and OF0.8, because the following line from ofPBO.cpp gives errors, reporting undefined members:

glTexSubImage2D(texture.getTextureData().textureTarget, 0, 0, 0, texture.getWidth(), texture.getHeight(), texture.getTextureData().glType, texture.getTextureData().pixelType, 0);

I get:

../../../addons/ofxPlaymodes_new/src/ofPBO/ofPBO.cpp|84|error: ‘class ofTextureData’ has no member named ‘glType’
../../../addons/ofxPlaymodes_new/src/ofPBO/ofPBO.cpp|84|error: ‘class ofTextureData’ has no member named ‘pixelType’

I’m not sure what to feed glTexSubImage2D instead… The ofTexture class in OF0.8 has no candidates that make the program run. It compiles but crashes with a segmentation fault pointing to ofTexture.


Hey @mennowo, I am having the exact same problem trying to compile @eloi's example. I can get arturo's to work, but the ofPBO code gives me the same error. It would be good to get this going, as I'm extremely interested in syncing the audio side as well. @arturo, would you have any idea what's going on with this error in the ofPBO class? Cheers.


I've uploaded the latest version I have here:


Thanks very much Arturo!!


Hey @joshuabatty,
meanwhile I have implemented my own version of a variable video buffer, which has fewer features than ofxPlaymodes and no audio.
I am unsure about the efficiency of my implementation. It uses a <vector> of instances of a frame class, which allocates memory in the form of arrays of unsigned chars to store frames (which I think is not so unlike ofxPlayModes). So far I get good results and it suits my needs; RAM usage is OK as long as the amount of video delay remains somewhat limited (under a minute is surely fine: about 3.3 GB of RAM would be used for color at 640x480).
One problem I still run into is the situation in which playback speed is below 1 and frames are being recorded into a buffer position that lies behind the playback position. In this case, at the moment recording would happen into the same frame number in the buffer as where video playback is at, the class sets playback speed to 1 (a quick fix). What should happen is insertion (using vector::insert) of the remaining frames after the playback position into the beginning of the buffer, using the copy constructor, and resetting the play position to 0; thus recording could continue normally, overwriting the now-copied frames, until the end of the buffer is reached, at which point the class decides whether to return to the beginning of the buffer or to increase the buffer size. I have not yet been able to implement this in a way that works.
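The relocation step described above could be sketched like this, with a plain value type standing in for a frame object (a hypothetical helper, not code from the actual class): copy the frames from the play position to the end of the buffer, insert that tail at the front, and reset the play position to 0 so the reader continues through the copies while the writer moves on.

```cpp
#include <cstddef>
#include <vector>

// Copy [playPos, end) to the front of the buffer via vector::insert
// (which uses the frame type's copy constructor), so playback can
// restart from index 0 while recording continues further along.
// Returns the new play position.
template <typename Frame>
std::size_t relocateTail(std::vector<Frame>& buf, std::size_t playPos) {
    // snapshot the tail first: inserting into `buf` while reading from
    // it would invalidate the source iterators
    std::vector<Frame> tail(buf.begin() + playPos, buf.end());
    buf.insert(buf.begin(), tail.begin(), tail.end());
    return 0;
}
```

One caveat with this shape: the insert grows the vector and shifts every existing index up by the tail length, so the recording position has to be offset by `tail.size()` after the call.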
If you are interested I can share some code; it is not really ready for GitHub yet (and I don't have much experience with GitHub, and no time right now to commit code).
Bye, menno


hi all !

I'm sorry, but right now I'm in China and I can't access my ofxPlaymodes code to check and help with the errors.

Just so you understand: Arturo made the whole architecture, but the audio part never got fully solved. We tried to implement it with pure OF audio functions, but it got so complex that we decided to merge the audio part with a Pd patch through ofxPd, where basically Pd and OF share the audio buffer. This solution was much easier and more powerful for our approach, as Pd has many more audio tools and gadgets.

Our goal was (and still is, once we get some spare time for it) to achieve an audiovisual sampling engine; basically we're interested in getting into granular synthesis from an audiovisual perspective, to be able to do our audiovisual live set.

We got really close with ofxPd, but we couldn't control all the buffer possibilities you get when doing real-time sampling and playing at the same time. As @mennowo mentioned, there are many situations where complexity arises in memory management, audio and video continuity, buffer header control, and so on.

I'm quite sure we have a quite solid solution for just video sampling, but when trying to deal with audio as well it gets more difficult. For example, accessing a random position of a recorded buffer in video is as trivial as jumping to that frame in memory, but if you do the same in audio there is an audible click because of the discontinuity of the waveform. So some sort of "declick" process needs to be done, and that is where it starts to become more complex. Changing speed in video is also trivial, but in audio you might be interested in pitch-shifting or time-stretching… much more complex.
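For reference, one common declick approach is a short linear crossfade across the jump: fade out the samples you were playing while fading in the samples at the new position. A minimal sketch (not Playmodes code; the fade length and linear shape are arbitrary choices):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Blend the tail of the old playback region into the head of the new
// one so a jump to a random buffer position produces no step
// discontinuity (the audible click).
std::vector<float> crossfade(const std::vector<float>& tailOld,
                             const std::vector<float>& headNew) {
    std::size_t fadeLen = std::min(tailOld.size(), headNew.size());
    std::vector<float> out(fadeLen);
    for (std::size_t i = 0; i < fadeLen; ++i) {
        float t = (fadeLen > 1) ? float(i) / float(fadeLen - 1) : 1.0f;
        out[i] = tailOld[i] * (1.0f - t) + headNew[i] * t; // fade out old, in new
    }
    return out;
}
```

An equal-power (sine/cosine) curve is often preferred over linear for perceived loudness, but the structure is the same; pitch-shift and time-stretch are a different, much bigger topic, as noted above.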

Once I can put my hands on my latest code I will come back, check how it is, and see how I can help. I would love to do all the audio and video processing in OF, but I really need help on the audio part, as I'm a bit lost on raw audio processing with OF…

Hope to be back soon with more …


Hi @eloi, just letting you know that I've been keenly working on this for the past few weeks. It looks like we have similar goals; I am currently finishing off my PhD on audiovisual granular synthesis, hehe. I had developed a method of my own, but I really like the real-time ring buffer technique that PlayModes uses, so I have decided to integrate what I already have with PlayModes.

In terms of the whole audio side of things, I agree that using SuperCollider, Reaktor, Max, or Pd isn't ideal, and I haven't wanted to go down that path. Instead I have worked out how to use the ofxMaxim library to solve this problem. This means all the jumping around avoids the clicking that was going on previously + the granular methods sound very juicy… but I guess most importantly, the whole audiovisual sampling and granular action can be 100% oF! :smiley:

Here is a link to the source if you want to have a look ->

And here is a quick video of it working ->

Currently I have this technique and PlayModes 90% in sync, but I am having a few problems getting 100% sync all of the time. This has to do with how PlayModes captures frames and sticks them on the end of the VideoBuffer, as opposed to the current position of the header. I will try to post a fuller description of this in another post soon, but for the moment I have hacked the newVideoFrame method in VideoBuffer.cpp as below. This approximates that kind of frame-insertion functionality and allows Maxi and Playmodes to sync (some of the time), but I don't really like the sloppiness of it so far. Regardless, I think this could work out to be a nice solution.

void VideoBuffer::newVideoFrame(VideoFrame & frame){
    int64_t time = frame.getTimestamp().epochMicroseconds();
    if(microsOneSec==-1) microsOneSec=time;
    int64_t diff = time-microsOneSec;
    if(diff >= 1000000){
        realFps = double(framesOneSec*1000000.)/double(diff);
        framesOneSec = 0;
        microsOneSec = time-(diff-1000000);
    }
    framesOneSec++;

    if (size() >= maxSize) {
        // buffer full: overwrite at the header position instead of appending
        frames[framePos] = frame;
        iterFramePos();
    } else if (size() < maxSize) {
        frames.push_back(frame);
    }
}

void VideoBuffer::iterFramePos() {
    framePos++;
    if (framePos >= maxSize) {
        framePos = 0;
    }
}



I've made an ofxPlaymodes-like addon of my own.
Many people seem interested in fast frame access for looping or delay effects.

I'm also using a RAM buffer of pixels, but it's quite limited.
I'd like to record 1.3-megapixel images at 60 fps for a few minutes. That would require a LOT of RAM!

I've been looking at video codecs designed for real time, like HAP, now supported in AVFoundation. I wonder how it would be possible to record a video and at the same time access past frames in the video being recorded?
HAP uses a compression algorithm called Snappy that is super-efficient in terms of speed. Maybe using this compression before saving each frame would be a step toward saving RAM?
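The compress-each-frame-before-buffering idea can be illustrated with a trivial run-length encoder standing in for Snappy (Snappy's actual C++ entry points are `snappy::Compress` and `snappy::Uncompress`; this toy RLE just shows the shape of the scheme, and how frames with large flat areas shrink):

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Toy stand-in for a fast per-frame compressor: each run of identical
// pixel values becomes one (value, count) pair. Real code would use
// Snappy or similar and keep the compressed blob per buffer slot.
std::vector<std::pair<unsigned char, std::size_t>>
rleEncode(const std::vector<unsigned char>& pixels) {
    std::vector<std::pair<unsigned char, std::size_t>> runs;
    for (unsigned char p : pixels) {
        if (!runs.empty() && runs.back().first == p)
            ++runs.back().second;   // extend the current run
        else
            runs.push_back({p, 1}); // start a new run
    }
    return runs;
}
```

The trade-off is that each slot's compressed size varies, so random access stays cheap (one decompress per frame) but the fixed frames-per-gigabyte arithmetic no longer holds.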

Lastly, instead of using a RAM buffer, what do you think about an image sequence saved to disk in real time (considering I have an SSD)?


Hey @talaron,

this is a while back for me now. I also wrote my own variable video buffer addon at that time. It is a bit messy, but I could share the code some time soon if desired. You could try it and see what happens with a 1.3 MP video stream…

The addon I wrote is aimed at 640x480 video frames, either RGB or grayscale. I think for a few minutes of high(er)-res video, using the disk would not be a bad idea; it will take time to implement, but you will be much more flexible as regards usage of video, size of the buffer, etc. Or you could buy a lot of RAM? :smile:
Actually, for 1 minute of uncompressed RGB video at 1280x1024 @ 60 fps, you would only need about 14 GB of RAM. So if 30 fps is OK and you get 32 or 64 GB, you would be fine… It could work, although it would cost some money and would be sort of a brute-force approach…
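That 14 GB figure checks out (in decimal gigabytes, uncompressed frames); a quick illustrative helper:

```cpp
#include <cstddef>

// Raw video memory use in decimal gigabytes (10^9 bytes),
// e.g. 1280x1024 RGB @ 60 fps for 60 s ≈ 14.16 GB.
double videoGigabytes(std::size_t w, std::size_t h, std::size_t channels,
                      std::size_t fps, std::size_t seconds) {
    return double(w) * double(h) * double(channels)
         * double(fps) * double(seconds) / 1e9;
}
```

Halving the frame rate or dropping to a single channel scales the result linearly, which is the whole trade-off space discussed above.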

As for codecs, I don't know about HAP, but I do know that in an encoded video stream using high compression, usually not every frame is stored as a full frame; the codec works with keyframes and between keyframes only stores changes. If you need direct access to arbitrary frames, this is problematic. But maybe it will work if you want to store/retrieve small fragments of video, and then for your purposes it will be OK?


With HAP, each frame is a keyframe; they are all fully stored.
I'm using 1280x1024 mono video @ 60 fps.
But I may have more control with an image sequence, saved to hard disk and compressed with Snappy to deal with less data.


Dear Arturo,

I just discovered your work with Playmodes. Since I would like to port something similar from Max to OF, it would be great if I could use your addon ofxPlaymodes. But I always get errors in the Xcode compiler - maybe you could also contact me by mail - - I also speak Spanish and a little bit of Catalan :wink:

Best wishes and thanks in advance!