Hello, we would like to do an installation with a Kinect inside a shop window, showing the depth data of people outside on the window via back projection.
It seems the setup is too slow on a Pi 3/4 with Linux, Python and OpenCV (it seems to use only one core), so we would like to try Windows with the original SDK, and we will also try openFrameworks to get to 30 frames per second.
What small, powerful Windows computer would you recommend for such a setup?
Has somebody done something like this? Or have you used a Raspberry Pi 3/4?
In my experience you want to use a Kinect v2 and put the camera touching the glass. The computer you need depends on what you are going to do with the graphics. A Kinect v1 will not be suitable for that environment.
And what kind of graphics did you do? Do you have a video showing what kind of interaction we are talking about? I use RPis at work, but on other projects; with a UPS they are reliable, but I have never used them for graphics projects.
We managed to pull it off. We used two Kinect v1s and two machines with 8 GB RAM and quad-core Intel Celeron J3455 processors, rated at 2.3 GHz in burst mode. The Kinect v1 works great glued directly to the shop window. The spot is great, as it has an awning and only gets direct sunlight for a very short time. Now I want to try some nice animations. Any tips?
The code is in C# and we use the Kinect SDK and Emgu, the OpenCV wrapper.
Yes, it works at night as well, no difference. Today is a cloudy day; I think we will only get problems on a really nice day with direct sunlight on it. It's up until March 22. I will also put up a Kinect v2 to test the difference. We just need to set the depth range right so people appear on it completely.
Yes, that could be it. I also have two Kinect v2s as backup; we just need to adapt the software a bit for the 2.0 SDK. I looked at the Intel NUCs, but they only had one HDMI port, so I chose these small ACEPC PCs. I will also test a NUC for a future project.
Did you do it with openFrameworks? Do you have the code somewhere?
Yes, it was done with oF, and I think I used this addon. It's pretty straightforward. You can get a BodyIndex picture from the Kinect, which gives you blobs of every tracked person. You can use these blobs to display the people any way you like.
I have the code, but I'm not allowed to share it, I'm sorry. But like I said, it's pretty straightforward. If you have any specific questions, I'll gladly help though.
Also, I might stop by and have a look at your installation in Munich; I'm nearby rather often.
The Intel NUC has two Mini DisplayPorts, so dual graphics output if you want.
Instead of using BodyIndex, you can do it the same way you would with a Kinect v1 in oF: get the depth image, apply max and min thresholds to define the maximum and minimum distance of your area of interest, and voilà, you have your blob.
Using only depth is bulletproof against light; the worst-case scenario is a hole in the shape.
I do not have a video of the shop-front install, but I used the same approach on this one, where we have direct sunlight coming from the right. Even with all that light over the sensor, we still have depth info.
This is in the Canary Islands, so lots of sunny days. It's a collaboration with another forum member, Paolo.
You can take a look at those examples to see how to prepare the depth info.