Here’s my latest piece - Bipolar. It’s an experiment in visualising sound using computer vision. I’m using Theo’s ofxKinect to build a mesh from the depth and RGB data. The face and vertex normals are calculated, and every second vertex is extruded along its normal using the sound data from the microphone.
How do I embed the video here? According to this thread - http://forum.openframeworks.cc/t/new-of-forum/5917/0 - I just add a link to the Vimeo page. This doesn’t seem to work.
Nicely done! How did you get the RGB and depth data so well aligned? The figure itself looks very good, without the usual Kinect noise.
Nice piece. Try using http instead of https to embed the Vimeo video.
Very nice - also curious about any RGB/depth syncing trickery.
I edited your link per Nick’s suggestion and it seems to work now.
The RGB-depth calibration is now handled inside the ofxKinect addon itself - check the latest version. You just need to include the following:
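Roughly, the setup looks like this (a sketch based on the current ofxKinect API - check the method names against your version of the addon):

```cpp
// In ofApp::setup() - enable registration so the RGB image
// is remapped to line up with the depth image, then start the device.
kinect.setRegistration(true);
kinect.init();
kinect.open();
```

The key part is calling `setRegistration(true)` before `init()`; with it enabled, the addon aligns the colour pixels to the depth pixels for you, so no manual calibration is needed.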