My first two performance pieces with openFrameworks

I just recently started using openFrameworks and I'm loving it. The tools it offers for interactivity let you make very interesting work with little code.

Below are a full video of my first performance, followed by an excerpt from my second performance this past weekend. Thanks to @roymacdonald and everyone else who has been helping answer my questions lately.


instrument / Phrase

This is my first piece made in openFrameworks. The position of the projector and of the camera the video above was shot with make it hard to tell what's going on, so I'll explain below:

Contours are derived from the Kinect depth image to make the silhouette outline of Takumi-chan, the speaking performer.
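
For anyone curious, the contour step follows the same pattern as the stock kinectExample that ships with openFrameworks. Here's a minimal sketch of that part, assuming ofxKinect and ofxOpenCv; the threshold and blob-size values are placeholders for illustration:

```cpp
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxOpenCv.h"

class ofApp : public ofBaseApp {
public:
    ofxKinect kinect;
    ofxCvGrayscaleImage grayImage, grayThreshNear, grayThreshFar;
    ofxCvContourFinder contourFinder;
    int nearThreshold = 230, farThreshold = 70; // placeholder depth window

    void setup() {
        kinect.init();
        kinect.open();
        grayImage.allocate(kinect.width, kinect.height);
        grayThreshNear.allocate(kinect.width, kinect.height);
        grayThreshFar.allocate(kinect.width, kinect.height);
    }

    void update() {
        kinect.update();
        if (!kinect.isFrameNew()) return;

        grayImage.setFromPixels(kinect.getDepthPixels());

        // keep only the depth band the performer stands in;
        // this also helps reject background/wall contours
        grayThreshNear = grayImage;
        grayThreshFar = grayImage;
        grayThreshNear.threshold(nearThreshold, true);
        grayThreshFar.threshold(farThreshold);
        cvAnd(grayThreshNear.getCvImage(), grayThreshFar.getCvImage(),
              grayImage.getCvImage(), NULL);
        grayImage.flagImageChanged();

        // min/max blob area are placeholders; tune for the performer's size
        contourFinder.findContours(grayImage, 1000,
                                   kinect.width * kinect.height / 2, 4, false);
    }

    void draw() {
        contourFinder.draw(0, 0); // the silhouette outlines
    }
};
```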

The electric violin and a pedal control a sound in Pure Data (the sound gradually becomes just the straight audio pickup of the instrument towards the end). The red bars are controlled by the FFT of this sound, sent via OSC from Pure Data. (I'm off to the side, playing the violin in the dark.)
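
The Pure Data → openFrameworks link is just ofxOsc. Receiving the bins and drawing the bars looks roughly like this, where the /violin/fft address, port, and bin count are assumptions for illustration (on the Pd side the bins would go out through something like [oscformat]/[netsend]):

```cpp
#include "ofMain.h"
#include "ofxOsc.h"

class ofApp : public ofBaseApp {
public:
    static const int PORT = 12345;   // assumed port, match the Pd patch
    static const int NUM_BINS = 32;  // assumed number of FFT bins sent from Pd
    ofxOscReceiver receiver;
    float bins[NUM_BINS] = {0};

    void setup() {
        receiver.setup(PORT);
    }

    void update() {
        while (receiver.hasWaitingMessages()) {
            ofxOscMessage m;
            receiver.getNextMessage(m);
            // "/violin/fft" is a made-up address; match whatever Pd sends
            if (m.getAddress() == "/violin/fft") {
                for (int i = 0; i < (int)m.getNumArgs() && i < NUM_BINS; i++) {
                    bins[i] = m.getArgAsFloat(i);
                }
            }
        }
    }

    void draw() {
        // one red bar per bin, height scaled by magnitude
        float w = ofGetWidth() / float(NUM_BINS);
        ofSetColor(255, 0, 0);
        for (int i = 0; i < NUM_BINS; i++) {
            float h = bins[i] * ofGetHeight();
            ofDrawRectangle(i * w, ofGetHeight() - h, w - 2, h);
        }
    }
};
```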

The performer's voice, via a pin mic, is also put into Pure Data, where a little bit of effect is added. The gray bars are controlled by the FFT of this sound. (They still move a little when reacting to the violin sounds coming from the speakers.)

The FFT shapes are centered around the contours. The position of the largest contour controls the panning of the sounds in Pure Data via OSC. You can't really hear this in the recording though, because a) in this performance Takumi-chan didn't move much from side to side, and b) the GoPro mic doesn't seem to pick up stereo nuances.
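
Sending that pan value back is just an ofxOscSender plus the largest blob's centroid. A minimal sketch, assuming the contour finder from above and a made-up /pan address and port (the Pd patch maps the 0–1 float onto left/right gain):

```cpp
// in setup():
//     sender.setup("localhost", 9000); // assumed host/port for Pd
void sendPanFromBlobs(ofxCvContourFinder& contourFinder,
                      ofxOscSender& sender, int srcWidth) {
    if (contourFinder.blobs.empty()) return;

    // find the largest blob by area
    ofxCvBlob* largest = &contourFinder.blobs[0];
    for (auto& blob : contourFinder.blobs) {
        if (blob.area > largest->area) largest = &blob;
    }

    // normalize the centroid x to 0..1 and send it as the pan position
    float pan = ofClamp(largest->centroid.x / srcWidth, 0, 1);
    ofxOscMessage m;
    m.setAddress("/pan"); // made-up address
    m.addFloatArg(pan);
    sender.sendMessage(m, false);
}
```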

As you can see, some wall contours were also detected, so there are two waveforms at times. This, along with the position of the projector (to make the image bigger), was improved in the second performance, which unfortunately wasn't recorded!


Here's an excerpt from the performance last Saturday. I'm offscreen again here, and the violin/Pure Data frequency and amplitude are used to control the shape of the "mind" floating around the performer's head. I had a fixed z-position for the mind, because I didn't get consistent depth readings from the Kinect, but now that I think about it, maybe I could have used a low-pass filter for that as well, as suggested here for the Haar problem.
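
If I try that, it would probably just be a one-pole low-pass (exponential smoothing) on the raw reading, skipping the zero depths the Kinect returns where it has no data. Something like this, with the smoothing factor a placeholder to tune:

```cpp
float smoothedZ = 0;
const float alpha = 0.1f; // placeholder: lower = smoother but laggier

void updateMindZ(float rawZ) {
    // the Kinect reports 0 where it has no depth data; ignore those frames
    if (rawZ <= 0) return;
    // one-pole low-pass: ease towards the new reading instead of jumping
    smoothedZ = ofLerp(smoothedZ, rawZ, alpha);
}
```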


Looking great!
Glad to be helpful. Feel free to ping me if you need help.

Just a suggestion: did you try using the Microsoft Kinect body tracking? It works really well, and you would immediately get the position of the center of the head. Caveat: you need to use Windows, but it is totally worth it.

cheers


Oh, perfect; I use Windows anyway. How can I use it? Googling it brings up some Azure SDK…

It depends on which Kinect you are using. I think that for the Kinect v1 (also known as the Kinect 360) there was something, but it was not really straightforward to get running, and for that one there were also other options for skeleton tracking.
For the Kinect v2 (also known as the Kinect One (yes, really confusing naming)) there was a Microsoft SDK; I can't remember its name, but it worked really well, and you could use it with openFrameworks straight away. Check this addon; a rough sketch of the idea is below. There are several more addons, check what is at ofxaddons.com.
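
From memory, with an addon that wraps the Kinect v2 SDK's body source, pulling out the head position looks roughly like this; treat every name here as an assumption and check the addon's examples for the real API:

```cpp
// rough sketch from memory; class/method names are assumptions,
// check the addon's examples for the actual API
kinect.open();
kinect.initBodySource();

// each frame:
kinect.update();
for (auto& body : kinect.getBodySource()->getBodies()) {
    if (!body.tracked) continue;
    // JointType_Head comes from the Kinect SDK's joint enum
    auto headPos = body.joints.at(JointType_Head).getPosition();
    // headPos is the 3D head position in camera space;
    // it could anchor the "mind" instead of a contour centroid
}
```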

The Kinect Azure is the newest one; it came out a few years ago. It is really nice, but it is kinda hard to find. Hope this helps.
