What Windows PC for an installation with Kinect v1/v2?

Hello, we would like to do an installation with a Kinect inside a shop window, showing the depth data of people outside on the window via back projection.

It seems the setup is too slow with a Pi 3/4 on Linux with Python and OpenCV (it only uses one core, it seems), so we would like to try Windows and the original SDK, and we will also try openFrameworks to get to 30 frames per second.

What small, powerful Windows computer do you recommend for such a setup?

Has somebody done something like this? Or have you used a Raspberry Pi 3/4?

I develop with a Kinect v2 on an Intel NUC i5, using elliotwoods' addon https://github.com/elliotwoods/ofxKinectForWindows2.

In my experience you want to use a Kinect v2 and put the camera touching the glass. The computer you need depends on what you are going to do with the graphics. A Kinect v1 will not be suitable for that environment.

OK, that seems quite expensive at 275 € and with only one HDMI port.

I ordered this 6 GB machine now to test.

Why do you say the Kinect v1 won't be suitable?

Because of external light: the sun, and light reflected by the glass.

Yes, Intel NUCs are expensive, but they are good computers. Some people here occasionally mention the Zotac brand, but I never owned one.

You can see the minimum requirements.
Your computer must have the following minimum capabilities:

  • 32-bit (x86) or 64-bit (x64) processor.
  • Dual-core 2.66-GHz or faster processor.
  • Dedicated USB 2.0 bus.
  • 2 GB RAM.
  • A Microsoft Kinect for Windows sensor.

OK, this PC I bought does not work; it only has 2.33 GHz and is single-core.

I am testing this now. There are also quad-core ones for only 169 €.

Have you tried with a Raspberry Pi?

The 3B+ has a 1.4 GHz quad-core CPU and 1 GB of RAM, and the 4 has a 1.5 GHz quad-core CPU and up to 4 GB of RAM.

Nice, let me know how it performs.

No, never tried on an RPi. But you cannot install Windows on it, so you cannot use the Microsoft SDK, and I do not know how reliable the libfreenect driver is for the Kinect v2.

We did it with a Pi 3/4 at a previous installation and it worked great, but we only had a small X window. Also, I think the compilation only used one core.

And what kind of graphics did you do? Do you have a video, so I can see what kind of interaction we are talking about? I use RPis at work, but on other projects; with a UPS they are reliable, but I have never used them for graphics projects.

There were no graphics. The idea was that we would track two people/blobs, and if they hugged, a street light would change color.

We managed to pull it off. We used two Kinect v1s and two PCs with 8 GB of RAM and quad-core Intel Celeron J3455 processors. They say it runs at 2.3 GHz in burst mode. The Kinect v1 works great glued directly to the shop window. The spot is great, as it has an awning and only a very short period of direct sunlight. Now I want to try to do some nice animations. Any tips?

The code is in C#, and we use the Kinect framework and Emgu, the OpenCV wrapper.

We also want to try it with openFrameworks now.

Nice, did you test it in daylight?

Yes, it works just as well, no different from night. Today is a cloudy day. I think we only get problems on a really bright day with sunlight directly on it. It's up until March 22. I will also put up a Kinect v2 to test the difference. We just need to set the range right so people are captured completely.

Cool, maybe it works because it is cloudy and did not receive direct light. Did you get a good frame rate with those computers?

Yes, that could be. It may be different in summer. Yes, great frame rate.

I had basically the same setup a couple of years back. You're really going to have problems with sunlight and a Kinect v1. The Kinect v2 works like a charm in most situations, though.

Also, generally I'd recommend Intel NUCs for Windows installations.

Yes, that could be. I also have two Kinect v2s as backup; we just need to adapt the software a bit for the 2.0 SDK. I looked at the Intel NUCs, but they only had one HDMI port, so I chose these small ACEPC PCs, but I will also test a NUC for future work.

Did you do it with openFrameworks? Do you have the code somewhere?

Yes, it was done with oF, and I think I used this addon. It's pretty straightforward. You can get a BodyIndex image from the Kinect, which gives you blobs for every tracked person. You can use these blobs to display the people in whatever way you like.
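For anyone following along, the BodyIndex-to-blobs idea can be sketched in a few lines. This is not the poster's oF code, just an illustration in Python/numpy (the Kinect v2 SDK's body index frame is a 512x424 uint8 image where each pixel holds a player index 0–5, or 255 for background; the tiny frame below is synthetic):

```python
import numpy as np

def body_index_to_masks(body_index: np.ndarray) -> dict:
    """Split a Kinect v2 BodyIndex frame into one boolean mask per person.

    Pixels holding 0-5 belong to a tracked body; 255 means background.
    """
    masks = {}
    for player_id in np.unique(body_index):
        if player_id == 255:  # 255 = no body at this pixel
            continue
        masks[int(player_id)] = body_index == player_id
    return masks

# Tiny synthetic frame: player 0 on the left, player 2 on the right.
frame = np.full((4, 6), 255, dtype=np.uint8)
frame[1:3, 0:2] = 0
frame[1:3, 4:6] = 2

masks = body_index_to_masks(frame)
print(sorted(masks))        # [0, 2]
print(int(masks[0].sum()))  # 4 pixels belong to player 0
```

Each mask can then be drawn, contoured, or fed to whatever display logic you like.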

I have the code, but I'm not allowed to share it, I'm sorry. But like I said, it's pretty straightforward. If you have any specific questions, I'll gladly help, though.

Also, I might stop by and have a look at your installation in Munich; I'm nearby rather often :slight_smile:


OK, sure, thanks. It's up until Monday, March 22nd, and it's in the window of Kaut-Bullinger. It's across from the Apple Store, in a side street off Marienplatz.

https://g.page/KAUT-BULLINGER-Schreibwaren?share

The Intel NUC has two Mini DisplayPorts… so dual graphics output if you want.

Instead of using BodyIndex, you can do it the same way you would with a Kinect v1 in oF: get the depth image, apply a min and a max threshold to define the nearest and farthest distances of your area of interest, and voilà, you have your blob.

Using only depth is bulletproof against light; the worst-case scenario is a hole in the shape.
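The min/max depth threshold described above could be sketched like this (a hedged illustration, not anyone's actual code; the frame size and millimetre values are made up for the example):

```python
import numpy as np

# Assumed distance band of interest, in mm (tune for your window).
NEAR_MM = 800
FAR_MM = 2500

def depth_to_blob_mask(depth_mm: np.ndarray) -> np.ndarray:
    """Binary mask: True where a pixel falls inside [NEAR_MM, FAR_MM].

    Kinect depth frames report 0 for invalid pixels, which the lower
    bound already rejects.
    """
    return (depth_mm >= NEAR_MM) & (depth_mm <= FAR_MM)

# Synthetic 3x4 depth frame: a "person" at ~1.5 m, background at 4 m,
# and a couple of invalid (0) pixels.
depth = np.array([[4000, 4000,    0, 4000],
                  [4000, 1500, 1520, 4000],
                  [   0, 1490, 1510, 4000]], dtype=np.uint16)

mask = depth_to_blob_mask(depth)
print(int(mask.sum()))  # 4 pixels survive the threshold
```

In oF or OpenCV the same idea is just two comparisons on the depth texture; everything in the band becomes your blob.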

I do not have a video of the shop-front install, but I used the same approach on this one, where we have direct sunlight coming from the right. And with all that light hitting the sensor we still have depth info.

This is in the Canary Islands, so lots of sunny days :slight_smile: It's a collaboration with another forum member, Paolo.

You can take a look at those examples to see how you need to prepare the depth info.

Which Intel NUC do you mean with two Mini DisplayPorts? I can only find the more expensive 600-range ones, and they have one HDMI and one USB-C.

Those animations look impressive! We are only at step one. Yes, we also did it with a min/max depth range, but we are still playing with it and still getting holes in the bodies.
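A common fix for those holes is a morphological close (dilate, then erode) on the binary mask; with OpenCV that is the one-liner `cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)`. Just to make the idea concrete, here is a minimal numpy-only sketch (not anyone's production code) with a 3x3 neighbourhood:

```python
import numpy as np

def dilate(mask: np.ndarray) -> np.ndarray:
    """3x3 binary dilation: a pixel turns on if any neighbour is on."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def erode(mask: np.ndarray) -> np.ndarray:
    """3x3 binary erosion: a pixel stays on only if all neighbours are on."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    h, w = mask.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return out

def close_holes(mask: np.ndarray) -> np.ndarray:
    """Morphological closing: fills small holes inside a blob."""
    return erode(dilate(mask))

# A 5x5 "body" blob with a one-pixel hole in the middle.
blob = np.zeros((7, 7), dtype=bool)
blob[1:6, 1:6] = True
blob[3, 3] = False            # the hole

closed = close_holes(blob)
print(bool(closed[3, 3]))     # True: the hole is filled
```

Bigger holes need a bigger kernel (or more iterations); past a certain point it pays more to fix the depth range or sensor placement than the mask.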