I’m about to dive into my first OF project and I would very much appreciate thoughts from more experienced OF folks on the following approach.
I have experience using Processing but OF is new to me. I’ve been looking to do something in OF for a while and it seems like the speed benefit over Processing might be useful for this project.
The basic idea is a program that takes video inputs (people in a room) and projects stylised ‘shadows’ onto the wall based on the video input. There are also a couple of effects based on motion/stillness and overlap between shadows in different sections of the screen that I’d like to incorporate, but for now I’ll just describe the basics.
Hardware setup: Mac Pro with dual video cards; 3 x projectors; IR lighting and IR sensitive webcams (likely Unibrain Fire-I). IR lighting because we want to keep the visible lighting in the space dim.
Blob tracking and processing: out-of-focus webcam -> threshold in OpenCV -> OpenCV blob detection -> render shadows based on the blobs’ outline points. Using an out-of-focus video source and appropriate threshold levels gives a nice ‘stretchy alien’ look as necks/limbs become thinner.
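In oF itself this pipeline would typically go through ofxOpenCv (grayscale image, threshold, contour finder), but just to show the core idea, here’s a minimal self-contained C++ sketch of the two middle steps — threshold a grayscale frame, then collect 4-connected blobs via flood fill. All names here (`threshold`, `findBlobs`) are my own, not oF/OpenCV API, and the blob sizes returned are a stand-in for the contour points you’d actually render from:

```cpp
#include <cstddef>
#include <stack>
#include <utility>
#include <vector>

// Pixels brighter than `level` become foreground (255), everything else 0.
// This is the same operation cvThreshold performs on the camera frame.
std::vector<unsigned char> threshold(const std::vector<unsigned char>& gray,
                                     unsigned char level) {
    std::vector<unsigned char> out(gray.size());
    for (std::size_t i = 0; i < gray.size(); ++i)
        out[i] = gray[i] > level ? 255 : 0;
    return out;
}

// Flood-fill each 4-connected foreground region; return one pixel count
// per blob (a real contour finder would return outline points instead).
std::vector<int> findBlobs(std::vector<unsigned char> mask, int w, int h) {
    std::vector<int> sizes;
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            if (mask[y * w + x] != 255) continue;   // background or visited
            int count = 0;
            std::stack<std::pair<int, int>> todo;
            todo.push({x, y});
            mask[y * w + x] = 1;                    // mark visited
            while (!todo.empty()) {
                auto [cx, cy] = todo.top();
                todo.pop();
                ++count;
                const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
                for (int d = 0; d < 4; ++d) {
                    int nx = cx + dx[d], ny = cy + dy[d];
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (mask[ny * w + nx] != 255) continue;
                    mask[ny * w + nx] = 1;
                    todo.push({nx, ny});
                }
            }
            sizes.push_back(count);
        }
    }
    return sizes;
}
```

The threshold level is what controls the ‘stretchy alien’ look: with a defocused source, thin limbs blur below the threshold before torsos do, so raising the level thins the silhouette.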
Projection: the plan is to project onto a series of flat screens placed along a ~25ft wall at varying (but not too dramatic) angles, while masking out the spaces between the screens. It looks like Theo’s work on warping is the place to start for this http://forum.openframeworks.cc/t/quad-warping-an-entire-opengl-view-solved/509/0
Any thoughts from people who know their way around this stuff?
Thank you, Patrick