How can I tell whether a person is showing her back to the Kinect or not?
I am using openNI for OF and ICST_ambisonics_2.2 for Max.
I think when someone turns their back completely, the left and right shoulder joint positions get displaced.
I need that data for moving sound sources to the opposite direction.
I calculated the angles between shoulders. It works until I show my back.
Is there any way to fool the Kinect into giving that kind of information?
Not very helpful, but I'm having the same problem with the new Kinect v2 and the MS SDK v2.
Placing another Kinect on the top/ceiling might be a solution, but not a very practical one, I guess.
Hi, no, it won't recognize someone's back. I don't know if the MS SDK can provide such info.
Also, skeleton tracking gets very bad as the subject turns perpendicular to the camera. If you want consistent data and want to let the user turn around freely, I would place more than one camera, calibrate them all together, and correlate their data.
The other option is a Kinect looking downward from above the subject, but you'll have to write your own computer vision procedures in order to determine the rotation of the subject.
You could add more logic to your app: if left and right are suddenly swapped, it probably means the person changed facing orientation. On top of that, you could use face tracking (though it's somewhat unreliable at times) to determine whether the person is facing the Kinect or showing their back.
@silverbahamut I wouldn't rely on the swapping. First, limbs are labeled right or left according to how you see them, so when the person rotates, the right limb will be the one on your right no matter if the person is facing front or backwards. Second, the tracking gets really unreliable as the subject gets close to perpendicular to the camera, so the limbs will surely "jump" and probably swap.
I think that the face detection approach is much more reliable.
Yet I think that the most reliable option by far would be to use a Kinect on top facing down.
Thanks everyone. I don't think just changing the Kinect's position will work for my project. If I place the Kinect on top of me it won't do skeleton tracking, because it will see me as a line. Or that's what I think. So maybe I can use another Kinect.
I don't think I will use the facing orientation, because I don't need to know when the joint positions are swapped. I just need continuous rotation data (like 360 degrees), so that I can use that OSC data to move the sound sources' positions (in virtual reality) in the opposite direction.
For example, when I am facing the Kinect, the sounds should stand in front of me. If I step closer, the sound sources should do the same (come closer to the user), and so on. In every case the sounds should follow the user. Everything is fine up to this point. But if the user/listener turns their back (of course they won't see the Kinect during the performance), the sound sources should change their positions in order to keep following the user, and they don't. Instead they stay in front of me as if I were looking at them, but I am not. The sound sources act like that because the Kinect sends OSC data to Max as if I were looking at the camera. So my project fails at that point.
Of course, putting the Kinect on top won't do skeleton tracking; that's why I said before that for it to work you'll need to write your own computer vision algorithms to determine the user's rotation. Even though this won't tell you where the user is facing, you'll be able to get some very smooth and precise rotation data.
You might want to try this:
Also check this for the computer vision algorithm. You could use something very similar to analyze and track the image from a top-mounted Kinect:
I will definitely check these links.