ofxDlib FaceTracker: detection vs tracking

I have a version of ofxDlib that still has the FaceTracker object. It looks like the current repo does not include it anymore.

Anyway.

I was wondering if there is a way to know when Dlib detects vs tracks a face.
My experience is that the moment of detection usually takes a bit longer than tracking does. I want to be able to see if frames get dropped in the moment of detection.
For my project I need to read and analyze all the face pixels and account for the continuous passage of time.
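
So far the best I can think of is timing the update loop myself and watching for spikes. Here is a rough sketch using plain openFrameworks timing; it flags slow frames, but it does not tell me whether dlib was detecting or only tracking in that frame:

    void ofApp::update()
    {
        uint64_t startMicros = ofGetElapsedTimeMicros();

        // ... grab the camera frame and run the face tracker here ...

        uint64_t elapsedMicros = ofGetElapsedTimeMicros() - startMicros;

        // Flag frames that take longer than one frame at 30 fps (~33 ms).
        if (elapsedMicros > 33333)
        {
            ofLog() << "slow frame: " << elapsedMicros / 1000.0 << " ms";
        }
    }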

Thanks for any advice.

Not directly related to ofxDlib, but if you need to do face detection or tracking I can strongly recommend OpenVINO or the OpenCV dnn module. I have been using them a lot lately and they work very well. With OpenVINO you can also use CPU or GPU acceleration on an Intel machine, with no need for a powerful NVIDIA card like TensorFlow and others require.

Thanks for the tip.
I am currently pretty happy with the performance of ofxDlib. It does everything I want.
Just wanted to learn a bit more about how to access some deeper functions.

Thanks again for pointing me to openvino, looks very interesting.

Hey @stephanschulz all of the most-up-to-date face tracking stuff is in the develop branch of ofxDlib. I haven’t started moving it back over to master due to lack of time, but the develop branch was working as of a couple days ago. See the readme / bootstrap command to bring it up.

@bakercp thanks.

Yes, I noticed that I am actually working with the develop branch. I got confused about the difference between the master and develop branches.

Anyway can you help with this question?

Thank you very much for your time.

Hm … not sure if I totally understand, but in this face tracker, the process is pretty simple:

  1. Detect all faces via bounding boxes (the detection can be configured to use the CPU-based HOG detector – the default – or the MMOD detector, which requires GPU for reasonable speeds).
  2. Then a really simple spatial analysis is performed to make a best guess whether a given bounding box detected this frame is the same as one detected last frame. It does this by checking whether the bounding boxes overlap or are close enough (see the sketch after this list).
  3. If the tracker determines that the bounding boxes are the same face (purely based on spatial analysis / proximity) it assigns them an index.
  4. When a new index is assigned, the event appears in the onTrackBegin callback. When a known id is reassigned from a previous frame, onTrackUpdate is called. When a track is lost, onTrackEnd is called. There are a few parameters that allow the track to be lost for a few frames and it will pick up the face again if it’s in the same general location as it was.
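
To make step 2 a bit more concrete, here is a rough illustration of the idea using plain dlib types. This is just a conceptual sketch, not the addon's actual code, and the distance threshold is an arbitrary number:

    #include <dlib/geometry.h>

    // Decide whether a box detected this frame probably belongs to the same face
    // as a box from the previous frame, using overlap and center proximity.
    bool isProbablySameFace(const dlib::rectangle& previous,
                            const dlib::rectangle& current,
                            double maxCenterDistance = 50.0) // pixels; arbitrary threshold
    {
        // Overlapping boxes are treated as the same face.
        if (previous.intersect(current).area() > 0)
            return true;

        // Otherwise fall back to a simple center-to-center distance check.
        return (dlib::center(previous) - dlib::center(current)).length() < maxCenterDistance;
    }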

To get to the question I think you are asking – in other face trackers (e.g. https://github.com/kylemcdonald/ofxFaceTracker), the process is:

  1. Detect a face using a haar detector.
  2. Then stop using the haar detector (it’s slow) and let the CLM tracker take over (this is fast). So, it’s a different architecture than the way it’s done in ofxDlib.

https://github.com/kylemcdonald/ofxFaceTracker2, which also uses dlib, uses an approach like the one I use in ofxDlib (always detecting, then figuring out which bounding boxes belong to the same id).

In some future version, instead of doing tracking based on spatial characteristics alone, one could use the facecode available in ofxDlib to do simple “recognition” based tracking.


I am still a bit confused about shapes vs tracks.

I would like to find the oldest face, i.e. the one with the lowest label number or the highest age.

How would I be able to iterate through all detections and query their age value?

        for (auto&& track : tracker.tracks())
        {
            ofLog() << "aTrack " << track.first << " confidence " << track.second.confidence << endl;
            // track.first.getAge();
        }
        // tracker.getAge(0);
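
For example, I imagine something along these lines, assuming tracks() is a map keyed by a plain integer track label, so the smallest key would be the oldest track (I am not sure this is the intended way, and getAge() above was just a guess):

    // Sketch: find the track with the lowest label, assuming lower labels
    // were assigned earlier and therefore belong to older faces.
    bool foundOne = false;
    std::size_t oldestLabel = 0;

    for (auto&& track : tracker.tracks())
    {
        if (!foundOne || track.first < oldestLabel)
        {
            oldestLabel = track.first;
            foundOne = true;
        }
    }

    if (foundOne)
    {
        ofLog() << "oldest track label: " << oldestLabel;
        // Next I would want to look up the face shape / landmarks for this label.
    }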

Once I've found the oldest face I would like to draw its face features. Again, I would need to connect tracks and shapes somehow, to figure out how they relate to each other.

Thanks for any advice.