General techniques for Interactive Walls?

Hello, I was just enjoying @TimS’s “Disturbance”: Disturbance - reactive audio-visual floor - #25 by TimS

If I understand the concept and his responses to questions correctly, he places the camera near the projector and looks only at the red channel of the captured video (to avoid a feedback loop from the green/blue particles), then displaces the particles wherever the image is darker, i.e. where people’s bodies are. I wanted to look at the code, but the file seems to no longer be available (it’s an old topic).
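In oF terms, I imagine that approach looks roughly like this - my own sketch using ofxCv (not TimS’s actual code), where the camera’s red channel is thresholded so that dark blobs (bodies) get tracked; the threshold values are guesses:

```cpp
// assumes ofxCv is added to the project; not TimS's actual code
#pragma once
#include "ofMain.h"
#include "ofxCv.h"

class ofApp : public ofBaseApp {
public:
    void setup(){
        grabber.setup(640, 480);
        red.allocate(640, 480, OF_IMAGE_GRAYSCALE);
        contourFinder.setMinAreaRadius(20);
        contourFinder.setMaxAreaRadius(300);
        contourFinder.setInvert(true);  // track dark regions (bodies), not bright ones
        contourFinder.setThreshold(80); // guessed value, tune to the lighting
    }
    void update(){
        grabber.update();
        if(grabber.isFrameNew()){
            ofPixels& rgb = grabber.getPixels();
            ofPixels& r = red.getPixels();
            for(size_t i = 0; i < r.size(); i++){
                r[i] = rgb[i * 3]; // keep only the red channel: green/blue particles vanish
            }
            red.update();
            contourFinder.findContours(red); // dark blobs = people in front of the wall
        }
    }
    void draw(){
        red.draw(0, 0);
        contourFinder.draw();
    }

    ofVideoGrabber grabber;
    ofImage red;                 // red channel only, as a grayscale image
    ofxCv::ContourFinder contourFinder;
};
```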

Just wondering if this is the most standard approach, or whether there are other common ways of building interactive walls and AR-like experiences. I’m sure the various methods have their advantages and disadvantages, but I’m not sure they’re summarized anywhere.

Hi @s_e_p - one of the standard approaches for creating interactive walls is to use a depth camera like the Kinect, Kinect V2, or, more recently, the Intel RealSense. OF already comes with the ofxKinect addon for this very purpose. The big advantage of a depth camera is that it sidesteps the colour feedback loop entirely (which, as you’ve rightly pointed out, can be a problem), since you track people from depth rather than from the projected image.
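Getting depth frames takes only a few lines - a minimal sketch assuming ofxKinect and the original Kinect:

```cpp
#include "ofxKinect.h"

ofxKinect kinect;

void ofApp::setup(){
    kinect.init();
    kinect.open();
}

void ofApp::update(){
    kinect.update();
    if(kinect.isFrameNew()){
        // 8-bit grayscale where brightness encodes distance -
        // none of the projected colours ever show up in it
        ofPixels& depth = kinect.getDepthPixels();
        // threshold / track blobs from here
    }
}
```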

There are approaches without a depth camera as well, though. I believe with an iPad Pro/iPhone one could use ARKit for this somehow - though that’s beyond my level of expertise.

Another approach would be to try something like Runway, which can do real-time person segmentation on a camera feed.

Hey, thanks for your reply. I do have a Kinect, but I don’t understand its depth/3D functionality in oF or which methods to use to control things like feedback loops. E.g., looking through the methods I saw setDepthClipping(), but of course changing the near and far clipping values alone won’t affect how ofxCv::ContourFinder picks up contours, since that works on pixel colour values…

With the Kinect you set a min/max threshold over the depth map, which defines an area in space where users are captured. Then you run normal blob detection on that, and you have the users to interact with whatever you want.
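Roughly, as a sketch - assuming ofxKinect plus ofxCv for the blob detection (the bundled kinectExample does the same thing with ofxOpenCv); the clipping and threshold values are placeholders to tune:

```cpp
// assumes ofxKinect and ofxCv are added to the project
#pragma once
#include "ofMain.h"
#include "ofxKinect.h"
#include "ofxCv.h"

class ofApp : public ofBaseApp {
public:
    void setup(){
        kinect.setRegistration(true);
        kinect.init();
        kinect.open();
        kinect.setDepthClipping(500, 4000); // mm; maps this range onto the 8-bit depth image
        mask.allocate(kinect.width, kinect.height, OF_IMAGE_GRAYSCALE);
        contourFinder.setMinAreaRadius(15);
        contourFinder.setMaxAreaRadius(250);
    }
    void update(){
        kinect.update();
        if(kinect.isFrameNew()){
            ofPixels& depth = kinect.getDepthPixels(); // 8-bit: closer = brighter
            ofPixels& m = mask.getPixels();
            for(size_t i = 0; i < depth.size(); i++){
                // keep only pixels inside the interaction band
                m[i] = (depth[i] > farThreshold && depth[i] < nearThreshold) ? 255 : 0;
            }
            mask.update();
            contourFinder.findContours(mask); // normal blob detection on the band
        }
    }
    void draw(){
        mask.draw(0, 0);
        contourFinder.draw(); // each blob is a user; centroids can drive the interaction
    }

    ofxKinect kinect;
    ofImage mask;               // white where something sits inside the depth band
    ofxCv::ContourFinder contourFinder;
    int nearThreshold = 230;    // 8-bit values after clipping, tune for your space
    int farThreshold  = 70;
};
```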

Play with the kinect example and it will be easier to understand.

As for people segmentation using a neural model, I did a demo using ofxTensorFlow2.

But the Kinect approach will be simpler.