I just recently started using openFrameworks and I’m loving it. The tools it offers for interactivity let you make very interesting works with little code.
Below are a full video of my first performance with it, followed by an excerpt from my second performance this past weekend. Thanks to @roymacdonald and everyone else who has been helping answer my questions lately.
This is my first piece made in openFrameworks. The positions of the projector and of the video camera the above video was taken with make it hard to tell what’s going on, so I’ll explain below:
Contours are derived from the Kinect depth image to make the silhouette outline of Takumi-chan, the speaking performer.
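If anyone wants to try something similar, the depth-to-contour step looks roughly like this. It’s a minimal sketch assuming ofxKinect and ofxOpenCv; the threshold and blob-size numbers are placeholders you’d tune to your space:

```cpp
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofxCvGrayscaleImage depthImage;   // 8-bit depth frame from the Kinect
    ofxCvContourFinder contourFinder;
    int farThreshold = 70;            // placeholder; tune to the stage depth

    void setup() {
        kinect.init();
        kinect.open();
        depthImage.allocate(kinect.width, kinect.height);
    }

    void update() {
        kinect.update();
        if (kinect.isFrameNew()) {
            depthImage.setFromPixels(kinect.getDepthPixels());
            // closer pixels are brighter, so this keeps everything
            // nearer than the far plane (i.e. the performer)
            depthImage.threshold(farThreshold);
            // consider at most 2 blobs, ignoring anything under 1000 px
            contourFinder.findContours(depthImage, 1000,
                                       (kinect.width * kinect.height) / 2,
                                       2, false);
        }
    }

    void draw() {
        ofBackground(0);
        ofSetColor(255);
        for (auto& blob : contourFinder.blobs) {
            blob.draw(0, 0); // the silhouette outline(s)
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```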
The electric violin and a pedal control a sound in Pure Data (the sound gradually becomes just the straight audio pickup of the instrument towards the end). The red bars are controlled by the FFT of this sound, sent via OSC from Pure Data. (I’m off to the side, playing the violin in the dark.)
The performer’s voice, via a pin mic, is also put into Pure Data, where a little bit of effect is added. The gray bars are controlled by the FFT of this sound. (They still move a little when reacting to the violin sound coming from the speakers.)
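Both sets of bars are driven the same way on the openFrameworks side. Here’s a stripped-down sketch of the receiving end using ofxOsc, kept as its own minimal app class for clarity; the `/violin/fft` address, the port, and the 0..1 magnitude range are all assumptions that would have to match the Pd patch:

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

class FftBars : public ofBaseApp {
public:
    ofxOscReceiver receiver;
    std::vector<float> bands; // one magnitude per FFT band, assumed 0..1

    void setup() {
        receiver.setup(9000); // arbitrary port; match the Pd sender
    }

    void update() {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            // assumed address; each float argument is one band's magnitude
            if (m.getAddress() == "/violin/fft") {
                bands.resize(m.getNumArgs());
                for (int i = 0; i < (int)m.getNumArgs(); i++) {
                    bands[i] = m.getArgAsFloat(i);
                }
            }
        }
    }

    void draw() {
        if (bands.empty()) return;
        ofSetColor(255, 0, 0); // red bars for the violin
        float w = ofGetWidth() / (float)bands.size();
        for (int i = 0; i < (int)bands.size(); i++) {
            float h = bands[i] * ofGetHeight();
            ofDrawRectangle(i * w, ofGetHeight() - h, w, h);
        }
    }
};
```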
The FFT shapes are centered around the contours. The position of the largest contour controls the panning of the sounds in Pure Data via OSC. You can’t really hear this in the recording, though, because a) Takumi-chan didn’t move much from side to side in this performance, and b) the GoPro mic doesn’t seem to pick up stereo nuances.
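The panning side might look something like this, extending the contour sketch above; the host, port, and `/pan` address are placeholders to be mirrored in the Pd patch:

```cpp
#include "ofxOsc.h"

// added to the contour sketch above
ofxOscSender sender;

void setup() {
    // ...kinect setup as before...
    sender.setup("localhost", 9001); // placeholder host/port
}

void update() {
    // ...after findContours()...
    if (contourFinder.nBlobs > 0) {
        // pick the largest blob (ideally the performer, not the wall)
        ofxCvBlob* largest = &contourFinder.blobs[0];
        for (auto& b : contourFinder.blobs) {
            if (b.area > largest->area) largest = &b;
        }
        // map the blob centre to 0..1 left-to-right and send it as a pan
        float pan = ofMap(largest->centroid.x, 0, kinect.width, 0.f, 1.f, true);
        ofxOscMessage m;
        m.setAddress("/pan"); // assumed address; match it in the Pd patch
        m.addFloatArg(pan);
        sender.sendMessage(m);
    }
}
```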
As you can see, some wall contours were detected, so there are two waveforms at times. This, along with the position of the projector (to make the image bigger), was improved in the 2nd performance, which unfortunately was not recorded!
Here’s an excerpt from the performance last Saturday. I’m offscreen again here, and the violin/Pure Data frequency and amplitude are used to control the shape of the “mind” floating around the performer’s head. I had a fixed z-position for the mind, because I didn’t get consistent depth readings from the Kinect, but now that I think about it, maybe I could have used a low-pass filter for that as well, as suggested here for the Haar problem.
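For reference, a one-pole low-pass filter really is only a few lines. In this sketch, getDistanceAt() is the actual ofxKinect call, while headX/headY, mindZ, and the alpha value are hypothetical names and numbers:

```cpp
// a one-pole low-pass filter: each new reading only nudges the smoothed
// value, so noisy or dropped depth frames barely register
class LowPassFilter {
public:
    float value = 0.f;
    float alpha = 0.1f; // smaller = smoother, but slower to follow movement

    float update(float input) {
        value += alpha * (input - value);
        return value;
    }
};

LowPassFilter depthFilter;

// in ofApp::update(), with headX/headY and mindZ as hypothetical names:
// float rawZ  = kinect.getDistanceAt(headX, headY); // depth in mm at a pixel
// float mindZ = depthFilter.update(rawZ);           // smoothed z for the "mind"
```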