Interactive video projections for a dance performance

In two weeks’ time, we’ll give the first public performance of the project Reaktiva projektioner 1.0 (“Reactive Projections 1.0”). I started working with openframeworks early last summer, and have been slowly building up the application since then (in my spare time).

I use a FireFly MV camera (retrofitted with a filter that blocks visible light) to film the dancers, and regular stage lights with multiple colored filters stacked in a row (red + green + blue) to light the backdrop with infrared light. This lets my program see the dancers as black contours against a light background.
We use LED-based stage lights to light the dancers. Since LEDs don’t radiate any infrared light, this ensures that the dancers stay dark in the camera image.

In the app I have a background thread that receives and processes the incoming video frames using OpenCV (blob detection and optical flow). Since the computer I’m running the app on has multiple cores, this gave me a big boost in frame rate compared to before, since the QuickTime frame decoding and the OpenCV processing can now run in parallel with the scene drawing. Though it was a bit tricky to get the multiple threads up and running. :slight_smile:
I segment the dancers’ contours by subtracting a background image that I save in advance, and then use blurring and thresholding to get the contour image to pass to the blob detection.
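For anyone curious, the core of that subtract-and-threshold step can be sketched in plain C++ without OpenCV (the function name and pixel-buffer layout here are just illustrative, and the blur pass is omitted):

    #include <cstdint>
    #include <cstdlib>
    #include <vector>

    // Segment the foreground by subtracting a pre-saved background frame
    // and thresholding the absolute difference, pixel by pixel.
    // Grayscale buffers; 255 = foreground (dancer), 0 = background.
    std::vector<uint8_t> segmentForeground(const std::vector<uint8_t>& frame,
                                           const std::vector<uint8_t>& background,
                                           uint8_t threshold) {
        std::vector<uint8_t> mask(frame.size());
        for (size_t i = 0; i < frame.size(); ++i) {
            int diff = std::abs(int(frame[i]) - int(background[i]));
            mask[i] = (diff > threshold) ? 255 : 0;
        }
        return mask;
    }

The real app does the same thing with the OpenCV image classes, which also give you the blur for free.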

In the app I have built up a system with switchable “scenes” that process the camera image and output video to the projector. I also have a simple sound file player (just a wrapper for ofSoundPlayer) and a loop player (also ofSoundPlayer) that can trigger multiple synced loops on top of each other and is sensitive to the movements of the dancers.
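The scene-switching idea can be sketched like this (all names here are illustrative, not the actual classes in my app):

    #include <memory>
    #include <vector>

    // Each scene processes the camera input and draws its own output.
    class Scene {
    public:
        virtual ~Scene() = default;
        virtual void update() = 0;  // process camera / tracking data
        virtual void draw() = 0;    // render to the projector window
    };

    // The app forwards update()/draw() to whichever scene is active.
    class SceneManager {
    public:
        void addScene(std::unique_ptr<Scene> scene) {
            scenes.push_back(std::move(scene));
        }
        void switchTo(size_t index) {
            if (index < scenes.size()) current = index;
        }
        void update() { if (current < scenes.size()) scenes[current]->update(); }
        void draw()   { if (current < scenes.size()) scenes[current]->draw(); }
        size_t activeScene() const { return current; }
    private:
        std::vector<std::unique_ptr<Scene>> scenes;
        size_t current = 0;
    };

The nice thing about this shape is that each scene can keep its own state, and switching is just changing an index.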

Finally, I have added a simple cueing system that can trigger scene changes, audio changes, do fades, stuff like that. It’s all handled “manually” by the operator by clicking on a button to trigger each new cue. A bit tedious, but it works.
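A minimal sketch of such a manual cue list (illustrative names, plain C++):

    #include <functional>
    #include <string>
    #include <vector>

    // Each cue is a named action: a scene change, an audio change, a fade...
    struct Cue {
        std::string name;
        std::function<void()> action;
    };

    // The operator clicks a button, which calls triggerNext().
    class CueList {
    public:
        void add(std::string name, std::function<void()> action) {
            cues.push_back({std::move(name), std::move(action)});
        }
        // Fire the next cue; returns false when the list is exhausted.
        bool triggerNext() {
            if (next >= cues.size()) return false;
            cues[next].action();
            ++next;
            return true;
        }
    private:
        std::vector<Cue> cues;
        size_t next = 0;
    };
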

I started out using ofxCocoa, but pretty soon I started to modify it to allow me to have multiple windows with of-content, and also nib-based Cocoa content in my app. It’s gotten a bit out of control: there are five windows that open when the app starts (not counting the main “projection” window), so I think I will have to add some window-management menu commands pretty soon. :slight_smile:

I use lots of different addons, and I have been referring to these forums _a lot_ during the development. I’m really grateful to the openframeworks community; the project would never have been possible without it!

Here is a trailer for the performance:


Nice work!
I’m currently working on an OpenCV-tracked interactive installation and would love to get some tips on background-threading the OpenCV stuff. I have to admit, threads kinda scare me. :slight_smile:


Yeah, threads are scary! :slight_smile:

My background thread looks something like this:

void videoObjectTracker::threadedFunction() {
    while (isThreadRunning()) {
        bool gotANewFrame = readNextFrameFromCamera();
        if (gotANewFrame) {
            threadLocker lock(*this);
            // do foreground segmentation
            // do blob tracking
            // do optical flow analysis

            // send the results to the main thread:
            videoTrackingResults results = getCurrentResults();
            {
                mutexLocker locker(fTrackingOutputQueueMutex);
                fTrackingOutputQueue.push_back(results);
            }
        }
        ofSleepMillis((1000 / 60) / 4);    // sleep for a quarter of a frame
    }
}

The videoTrackingResults class contains the “results” of the analysis of a single video frame: stuff like blobs, various images containing optical flow data and such. The results object is pushed onto a queue that is read by the main thread (from my app’s update() function), where I use the same mutex in the same way to pop any entries from the queue. If there are no entries, there is no new frame from the camera. If there are multiple entries, I only use the last (newest) one and drop the others.
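That “drain the queue, keep only the newest entry” pattern can be sketched in plain C++ with std::mutex standing in for the of wrappers (all names here are illustrative):

    #include <deque>
    #include <mutex>
    #include <optional>

    // The tracker thread pushes one entry per analyzed frame; the main
    // thread drains the queue each update() and keeps only the newest.
    template <typename T>
    class LatestQueue {
    public:
        void push(T value) {
            std::lock_guard<std::mutex> lock(mtx);
            queue.push_back(std::move(value));
        }
        // Returns the newest entry and drops the older ones;
        // returns empty if no new frame has arrived.
        std::optional<T> popNewest() {
            std::lock_guard<std::mutex> lock(mtx);
            if (queue.empty()) return std::nullopt;
            T newest = std::move(queue.back());
            queue.clear();
            return newest;
        }
    private:
        std::mutex mtx;
        std::deque<T> queue;
    };

Dropping the older entries is the important bit: if the main thread falls behind, you want to render the freshest tracking data rather than work through a backlog.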

There is also a bit of communication between the main thread and the background thread, because I have user interface elements for controlling various parameters for the camera and OpenCV stuff. For these functions, I block the thread completely using

    threadLocker locker(*this);  
    // code that modifies the parameters here  

Since I have the same threadLocker in each pass through my threadedFunction(), the code that wants to modify the parameters will block until the current pass in the thread is done, and then the thread in turn will wait until the parameter modification is complete.
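In plain C++ terms (with std::lock_guard standing in for the threadLocker, and illustrative names), the pattern looks like this:

    #include <mutex>

    class Tracker {
    public:
        // Worker thread: the lock is held for the whole processing pass.
        void processOnePass() {
            std::lock_guard<std::mutex> lock(mtx);
            // ... segmentation / tracking using `threshold` ...
        }
        // UI thread: blocks until the current pass finishes, and the next
        // pass in turn waits until the update is done.
        void setThreshold(int value) {
            std::lock_guard<std::mutex> lock(mtx);
            threshold = value;
        }
        int getThreshold() {
            std::lock_guard<std::mutex> lock(mtx);
            return threshold;
        }
    private:
        std::mutex mtx;
        int threshold = 80;  // arbitrary example value
    };

It’s a blunt instrument (the UI can stall for up to one processing pass), but for occasional parameter tweaks it’s plenty.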

Hope that makes sense.

(In reality, my app is a little bit more complex than that. I actually read the frame images from the camera in another thread, and have another mutex-protected queue that sends the frame images from that thread to the OpenCV thread. The reason I do that is that I have yet another thread that records the raw camera images to a QuickTime movie on disk; I want to avoid dropped frames there if at all possible, and I also want the QuickTime encoding to steal as little time as possible from the rest of the app.)

Oh, and one more thing that I forgot about the background thread: all images you create or modify in the background thread must be set to not use an OpenGL texture, like this (myCvImage being whatever ofxCvImage or ofImage you use in the thread):

    myCvImage.setUseTexture(false);

If you don’t do that, you will quickly get lots of crashes (or at least I did on Mac OS).

Awesome! Thanks for the tips. :slight_smile:
I’m going to attack it all this weekend.
I may have to come up with another way to stitch my cameras together though; currently I’m using an fbo to draw two cameras’ frames side by side, and blob tracking across them. This won’t work in a background thread, will it?

Yeah, I think you’ll get into trouble with the fbo if you start using it from a background thread, though I could be wrong, haven’t tried it myself. (I do all my image manipulations in the background thread using ofxCvImage.)