Recording OpenFrameworks Programs

Hi all. I’m wondering about recording openFrameworks programs using a second machine, and I’m wondering how best to go about it. I know some people use Camtasia or the Mac OS X equivalent, but they just don’t seem to be smooth or fast enough. I’ve tried Snow Leopard’s full-screen record but that doesn’t seem to be good enough either. I’m maxing out my machine and it seems to be causing problems, sadly.

I found this link http://www.udart.dk/2009/07/30/the-5-video-mixer-how-to-do-a-dv-screen-cast-through-firewire/ which suggests that another machine can record from a FireWire stream. I tried this with my Windows box and, surprisingly, the Windows machine picked up my Mac as a FireWire video device. So far so good.

Sadly, Quartz Composer’s Live DV is simply not playing. It can’t load the v002 plugin as it is “untrusted”, and even the components that are trusted just fail with an error :frowning:

I’m wondering whether another program like VDMX might be able to play out across FireWire. Has anyone any thoughts? What do people use for their yummy Vimeo clips?

One easy way is to use ofImage’s grabScreen() function: http://www.openframeworks.cc/documentation?detail=ofImage#ofImage-grabScreen and then save the resulting images to files numbered like screengrab001.png, screengrab002.png, … Then collect all the files into a single video later on using ffmpeg or something.

Or you can use ofxQtVideoSaver to record a QuickTime movie (http://forum.openframeworks.cc/t/recording-quicktime-with-sound-in-sync/775/0).

I’d have thought grabScreen() would be too CPU-intensive. I have very little CPU to spare, but I shall try them both. I’m gutted that the FireWire method isn’t working, and all because of that damn Live DV program (and Apple’s annoying “trusted” policy with Quartz components!)

Thanks guys. Managed to get the screen-grab approach working with the movie player and, tbh, it’s not that bad at all; there is a slight performance hit and the quality is down, but it’s the best yet. I’m just annoyed that I got so close with the FireWire recording, only to be pipped by annoying Apple! >< Clearly, I should go back to using Linux! :slight_smile:

Recently I participated in a project in Strasbourg where a single clarinetist (Adam Starkie) played Steve Reich’s “New York Counterpoint” for 10 clarinets. The piece is circa 11 minutes long. Nine tracks were recorded on video and one was played live by Adam, with the 9 videos projected on a screen above him, mixed with pictures of New York by a video artist, Svetlana Abracheva. The other soundtracks were played through 2 loudspeakers.

I loaded the 9 videos with openFrameworks in 3 rows of 3 rectangles and wanted to save the mix as another video. I tried to use OpenCV, knowing that it has a video-saving option, but unfortunately opencv/highgui is not part of openFrameworks.

I tried to load different versions of OpenCV with all the libs, but I get conflicts with openFrameworks’s own OpenCV. Maybe somebody has an idea how to use highgui with openFrameworks?

Finally, because of time limits, I decided to save each frame separately in play/pause mode, using grabScreen() and saving each image to a separate file. Then I used the program VirtualDub (available for Windows) to transform the image sequence into an AVI video file (DV/uncompressed) that was then processed by Svetlana to mix with the background images and the sound. All this worked correctly and no frame was dropped; it just took too much time to save about 7000 images (about 26 gigabytes), at a saving speed of 2 images per second!

Maybe it would have been easier to render to an offscreen texture and then use OpenCV to save the video to AVI. Is this possible?

I also had another problem:
I loaded the 9 videos into openFrameworks by asking the OF video player to display them at a third of the videos’ width and height, but in the resulting video I got aliasing artefacts on the keys of the clarinet.
The problem was solved by resizing the videos with a video editing program before loading them into openFrameworks. Does somebody have an idea why?

Best,
Bernard.

I also want to point out that with the method I used I had absolutely no loss of quality, because I discovered that the OF video player works not only with QuickTime files, but also with uncompressed AVI files (tested on Mac OS 10.5, Windows XP and Windows Vista).
The file extension doesn’t seem to matter; it works with uncompressed QuickTime or AVI.
It also works with MP4 compression, but the quality is worse of course.

I wrote a threaded image saver for the purpose of recording video from a running app with minimal performance hit to the app. It works by adding frames to be saved to a threadsafe linked-list queue, which are then read and saved to disk by a separate thread, so as to cut down on I/O wait time in the main thread. It works pretty well and even has support for saving multiple channels of video. The only problem is that if you are saving high-res or very high-fps video, the queue can get very large and potentially grind the system to a halt with memory thrashing. It works well for the application I needed it for, though.

I can clean it up and post it here if people are interested. Maybe we can come up with a better way of doing it; I know a lot of people want this feature.

Hi Tim,

I’m very interested to see your code for this because I’m trying to save 1920x1080 content, and even when using the ThreadImageSaver class I get a lot of dropped frames; in fact it’s totally unusable, e.g. running at 30fps I get about 30 images saved per minute! This is on an i7 2.66 with 6 GB of RAM.

I’ve also tried using ffmpeg to save the screen, and while this is much better it’s still not good enough.

Essentially my app is a slideshow app, so I already have the images etc. I’m just using OF for placement, panning etc. Because of this I’m toying with the idea of writing XML that can be imported into Sony Vegas; however, it would be much easier to just record the output directly in OF.

The issue is basically that I’m using ofxTween, which relies on time rather than framerate, so every dropped frame really matters. I’m considering extending ofxTween so that it uses frame numbers rather than time, at least in terms of advancing the tween.

Well, I have extended ofxTween so I can have the tween based upon the frame number; however, I need a clean way to get the target frame rate that is set using ofSetFrameRate().

This is because I don’t want to change the API of ofxTweenFrame: it still accepts milliseconds as input for duration and delay, and then converts these based upon the frame rate (the target frame rate, that is, not the actual frame rate, which changes dramatically if you’re saving out images of each frame).

So, I need a clean way to get the target frame rate. Of course I could hack this together, but if I want to commit this back to my fork of ofxTween on GitHub so that others can use it, then I think it would be more useful if I could do it in a nice clean way.

Ideally what I want is an ofGetTargetFrameRate() whose return value is set when ofSetFrameRate() is called.

Is this something that others would find useful, and thus should it make its way into the OF core, or should I simply hack ofxTweenFrame so that users have to pass in the targetFrameRate?

The goal I had with ofxTweenFrame was that when a user wanted to switch from ofxTween to ofxTweenFrame, so they could capture image output per frame, all they needed to do was change the type of their ofxTween variables to ofxTweenFrame and change the include from “ofxTween.h” to “ofxTweenFrame.h”.