I'm in the first phase of developing an interactive installation that will occupy an entire urban square. I would like to be able to track people (individuals and groups) and use that presence data to modify an image projected on the floor of the square as they walk over it.
The dimensions of the installation would be around 25 x 15 m, and we would need to place the projectors and cameras on a building in the same square.
We have some ideas about how to approach the tracking module of the project. We have considered using thermal cameras and IR cameras. Could anyone advise us on which method would be more achievable/accurate?
Thanks a lot in advance for your help.
Hi, you can achieve this with both IR and non-IR cameras. Thermal cameras are super expensive and I don't think they will give you any improvement in tracking.
Depth-sensing cameras can be an option if you want fancier or more accurate tracking. Kinects might work too if they are placed within their depth range. You could also go for an Intel RealSense, which is really nice too, but more expensive than a Kinect.
In any case I would place the cameras looking straight down, ideally one camera per projector. You could use a Raspberry Pi with its camera module mounted next to each projector, run an OpenCV app on the Pi, and send its results via OSC to the computer that processes and creates the visuals. The Pi might even be capable of handling the graphics too, making each projector/camera module independent. Take a look at ofxCv; it has a nice running-background implementation for removing the background, and you can then use its blob tracker to track people.
hope this helps
Hi Roy, thanks for the response, much appreciated.
The idea of the individual RPis is simply brilliant. Even so, I guess I could have problems with the background subtraction (which I assume is mandatory for the blob tracking), because I will not be able to control all the lighting conditions in a public space.
The event would be in the evening/night, but even then I'm not sure the IR spectrum coming from the interaction area will stay constant.
I was thinking about using depth sensors, but the required range would be around 25 m, so that's a real problem.
That said, do you think a proper background subtraction could be done so that it works on the RPis?
Thanks a lot for the advice and help.
Hi, well, if the cameras are going to be at 25 m (that's a lot) and you still want to use depth, maybe a stereo camera setup can give you some decent results. You'll need to try.
As for the background subtraction, it can be tricky to keep it clean the whole time, although you can update it throughout the event. You'll need to put up some IR lamps to illuminate the scene. I would first try to find out whether it's possible to put the cameras closer, so you cover only the needed area and nothing more, which will make it easier to illuminate with IR. Then you'll have to test which option best suits your needs, but I would bet that IR is the way to go.
I am not sure, but I think there might be some machine-learning algorithms that can get a better result than regular CV background subtraction. Take a look at http://runwayml.com/; @genekogan might be able to give you an answer about this.
25 m is definitely too much for a Kinect; I don't know of any long-range depth cams, but it's been a few years since I looked for them. I think background subtraction with IR is probably the best bet. As far as Runway goes, you could maybe try to train YOLO on overhead people, but it sounds to me like you are too far up to get large bounding boxes, and an overhead view is weak input data compared to front-facing. I'd go with the simpler strategy first before you spend a ton of time training something.
Thanks both, I will look into how Runway could help me get a more stable result than simple background subtraction.
By the way, I was looking into long-distance depth cameras, and although expensive, it seems that a ZED camera with a particular software setup can reach distances of 40 m.
With a ZED camera you will need an Nvidia GPU, and since it is a stereo RGB camera, I think it will not work at night without light.
Thanks, Pandereto. You are right, a ZED camera won't work well in a dark environment, as it is not an IR sensor.