ofxKinect + Audio input

Hi there,

I recently set up a Kinect with openFrameworks using ofxKinect. So far I haven’t coded anything in OF; I have a little experience with Processing and Xcode.

I would really like to duplicate the effect shown in this video:

specifically the visualization of the point cloud as evenly spaced lines that react to audio input, not so much the flying bubbles introduced later in the video.

Anyone have tips, links to documentation, or pre-made examples to help me achieve this?


Take a look at the ofxKinect and audioInput examples. One approach would be to simply skip every N rows in the point cloud and draw an ofPolyline (or ofLine) through the points in each remaining row.

Then, to make it audio-reactive, you can use audio data.
The audioInputExample will give you something basic like overall levels, but if you want to get crazy you can use something like ofxFFT, which analyzes the sound into frequency “bins”, so you can have different triggers for things like bass, mids, etc.

Once you have that data you can offset the Y values in each row so it’s not a flat line.

Does that make sense?

I’ve also been working on a small open source project you might be interested in: https://github.com/benMcChesney/ofxOpenVJ. The goal is to use a Kinect and sound analysis to display and render “scenes”, but it can be a bit complicated to set up, so I’d start by trying to make it yourself in a testApp and see how far you get!

Best of luck, and welcome to OF!