For the last few months I’ve been working on an open source toolkit, written in openFrameworks, that lets you make movies with the Kinect. It has a process for mapping the data from an external SLR onto the depth data. It also has a Kinect data recorder and compression system that is useful even on its own.
We’ve been teaching workshops and making videos with it, and are having a lot of fun! I’ve gotten a lot of help from the OF community in building the app, and have had a lot of help from the Studio for Creative Inquiry at CMU… but there is a ton of work to be done and it’s starting to be a bit more than I can handle by myself, so I’m putting feelers out to see if there is anyone out there who’d like to join in.
I’m looking for collaborators to help get it working better on Windows and to make a Linux port. I’m also looking for developers who want to help integrate the data workflow into tools like Houdini and Cinema 4D.
I’d like to get it to a stable point in the next month and make some nice tutorials, then do another round of workshops and projects.
We’ve also been fielding a few commercial/music video requests that we’ve been turning down due to lack of time. So a bigger team could really help take these on!
Here is the source code for the app:
along with a collection of addons you can get by running clone_addons.sh
Here are some recent videos we’ve made using it:
https://vimeo.com/41232022 – workshop in Barcelona
https://vimeo.com/40058384 – overview, workshop from Resonate festival in Belgrade
And a few more soon to be released!
Send me a DM or reply to this thread if you are interested, or have any questions. Hope you want to get involved!
I’m also working with it and have modified it a bit to fit my ideas, yet a lot of code could be shared.
I have been pulling the data out and rendering/modifying it in C4D. So far I’ve been exporting an .obj sequence and importing it into C4D, which works just fine for me. Are you looking for an integration of this kind, or something more sophisticated like a plugin for C4D?
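For anyone curious what that workflow looks like, here is a minimal sketch of a per-frame .obj writer of the kind described above. It is not the toolkit’s actual exporter; the `SimpleMesh` struct and `writeObjFrame` function are hypothetical names, assuming a simple triangle mesh with per-vertex texture coordinates (so the SLR texture can be reapplied in C4D).

```cpp
#include <cstdio>
#include <fstream>
#include <string>
#include <vector>

// Hypothetical minimal mesh: positions, texture coordinates, triangle indices.
struct SimpleMesh {
    std::vector<float> vertices;  // x,y,z triples
    std::vector<float> texCoords; // u,v pairs (for the SLR texture mapping)
    std::vector<int>   faces;     // 1-based vertex indices, 3 per triangle
};

// Writes one frame of the sequence as frame_0000.obj, frame_0001.obj, ...
// and returns the path. C4D can import such numbered .obj sequences.
std::string writeObjFrame(const SimpleMesh& mesh,
                          const std::string& dir,
                          int frameIndex) {
    char name[64];
    std::snprintf(name, sizeof(name), "frame_%04d.obj", frameIndex);
    std::string path = dir + "/" + name;
    std::ofstream out(path);

    // v lines: one per vertex
    for (size_t i = 0; i + 2 < mesh.vertices.size(); i += 3)
        out << "v " << mesh.vertices[i] << " "
            << mesh.vertices[i + 1] << " "
            << mesh.vertices[i + 2] << "\n";

    // vt lines: one per texture coordinate pair
    for (size_t i = 0; i + 1 < mesh.texCoords.size(); i += 2)
        out << "vt " << mesh.texCoords[i] << " "
            << mesh.texCoords[i + 1] << "\n";

    // f lines: vertex/texcoord index pairs, sharing the same index here
    for (size_t i = 0; i + 2 < mesh.faces.size(); i += 3)
        out << "f " << mesh.faces[i]     << "/" << mesh.faces[i]
            << " "  << mesh.faces[i + 1] << "/" << mesh.faces[i + 1]
            << " "  << mesh.faces[i + 2] << "/" << mesh.faces[i + 2] << "\n";
    return path;
}
```

Calling `writeObjFrame(mesh, outputDir, frameIndex)` once per recorded frame gives a numbered sequence that C4D can step through; a plugin would mainly save the export/import round trip.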
I also have a Blackmagic Intensity Extreme with which I can capture raw data from the DSLR. I want to integrate its capture into RGBDToolkit. So far it’s only an intention.
Have you tried using Canon’s EDSDK to capture directly into the computer via USB instead of recording to the SD card? I haven’t checked whether it is technically possible, but I’m curious about it.
I’m also going to integrate several Kinects at the same time.
I still have no videos to share, just a lot of tests. I hope to share something soon.
Let me know what you think.
I sent you a PM, but when I check my outbox, it’s not there… Anyway, I am interested in participating. I have the rig and built the software. OS X.
@roy – I’m really really curious to see your workflow! This is exactly how I was imagining it. Are you exporting the texture maps as well so the SLR mapping works? A plug-in would be amazing, but may be unnecessary if the obj system is good enough.
@eight – did we meet at the WiredFrames show? Didn’t get the PM -
Anyway, glad to have you guys. I really think before we move any further we should get the current stable release on github working on Windows and Linux.
I just made a little todo list and put it in the repo
I’ll send around a little email to talk about how we may get started and split up some of the tasks.
@obviousjim – yes, we did. In the PM I wrote that I am about to test the whole workflow, but I am waiting for the Kinect to come back from MS warranty repair.
I haven’t done the SLR mapping–texture link yet, but I think it won’t be much trouble (every time I say that, it ends up being much more trouble than I expected ;P). I’ll send you some code experiments over the weekend, because right now I’m really busy with work.
BTW, I’m building a 3D printed (makerbot) rig for the kinect and the DSLR. I haven’t finished printing it yet. I’ll share the files once it works.
I can try to build and test on Windows next week.
We would be interested in helping develop the project. My partner in Miami is testing the app to do a music video for a local band, and we plan to build the tool by the end of June here in Madrid.
I’ll keep you posted on our progress.
What about applying some shaders on the fly by making some patches for ofxComposer?
I’m arriving in NY on the 28th of July! Until then my life is pretty chaotic, moving and closing things up here in Buenos Aires.
See you then!
We have a group of programmers at the Media Lab working on realtime segmentation that might improve some of the registration between the depth image and the RGB images. I’ll post results once we get our code working correctly. Looks like an exciting direction.