As the topic’s title suggests, I’m curious to learn how people are tracking their skeletons these days.
Let me go first: most of the time, when I need skeleton tracking, I’m still using the old NiTE library or the Kinect SDK. I imagine this is a very common practice.
Sometimes, when the scenario is simple enough, I also fall back on old-school, pre-ML skeletonization techniques.
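For anyone unfamiliar with what I mean by pre-ML skeletonization: the classic approach is a morphological skeleton (Lantuéjoul’s method), which thins a binary silhouette down to a one-pixel-wide medial set using only erosions and openings. Here’s a minimal sketch using scipy — the function name is mine, not from any particular library:

```python
import numpy as np
from scipy import ndimage

def morphological_skeleton(img):
    """Lantuejoul's morphological skeleton of a 2D binary mask.

    Repeatedly erodes the shape; at each step, the pixels that an
    opening would remove are medial points, so they join the skeleton.
    """
    img = img.astype(bool)
    struct = ndimage.generate_binary_structure(2, 1)  # 3x3 cross element
    skel = np.zeros_like(img)
    eroded = img
    while eroded.any():
        opened = ndimage.binary_opening(eroded, struct)
        skel |= eroded & ~opened          # keep points the opening strips away
        eroded = ndimage.binary_erosion(eroded, struct)
    return skel

# e.g. a filled blob from a depth-thresholded silhouette
mask = np.zeros((30, 50), dtype=bool)
mask[5:25, 5:45] = True
skeleton = morphological_skeleton(mask)  # thin subset of the original mask
```

On clean, well-segmented silhouettes this gets you a stick figure you can then label heuristically, which is exactly why it only works in “easy/fitting” scenarios.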
For most of the real-world situations I deal with, OpenPose is still not the most practical solution, and again, I think a lot of people feel the same way.
I really think it’s kind of weird how, in the post-Kinect world, a lot of new depth cameras hit the market letting us reuse most of our depth-sensing code without rewriting a single line, yet there’s a thick silence when it comes to skeleton tracking. There’s some talk about hand tracking, but almost nothing about robust full-figure tracking.
So here I am: anybody working on this? Any papers or libraries I should check out?
Any hackish NiTE workaround?