ofxFaceTracker expression mapping

Hi,

Does anyone have experience with mapping expressions onto the ofxFaceTracker output? For example, the face should always smile regardless of what the user is actually doing. My idea is to map a pre-built ‘smile’ model onto the face tracker points and apply the rotation matrix so that it matches the face’s orientation, but I don’t know how to take the full geometry of the face into account.
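Roughly what I have in mind for the pose-matching part (an untested sketch, not working code: it assumes ofxFaceTracker’s getPosition/getScale/getRotationMatrix accessors, and “smile.ply” is a hypothetical mesh I would save once in the tracker’s object space, e.g. from getObjectMesh() while smiling):

```cpp
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber cam;
    ofxFaceTracker tracker;
    ofMesh smileMesh; // hypothetical pre-built 'smile' mesh, stored in object (model) space

    void setup() {
        cam.setup(640, 480);
        tracker.setup();
        // hypothetical asset: e.g. tracker.getObjectMesh() saved once while smiling
        smileMesh.load("smile.ply");
    }

    void update() {
        cam.update();
        if(cam.isFrameNew()) {
            tracker.update(ofxCv::toCv(cam));
        }
    }

    void draw() {
        cam.draw(0, 0);
        if(tracker.getFound()) {
            ofPushMatrix();
            // image point ~ position + scale * R * object point,
            // so translate, then scale, then rotate before drawing the object-space mesh
            ofVec2f pos = tracker.getPosition();
            ofTranslate(pos.x, pos.y);
            float s = tracker.getScale();
            ofScale(s, s, s);
            ofMultMatrix(tracker.getRotationMatrix());
            smileMesh.drawWireframe(); // the canned smile follows the head pose
            ofPopMatrix();
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```

This only re-poses the canned mesh over the live head; blending it with the user’s actual face geometry (e.g. mixing the smile mesh with the current object mesh per vertex) is the part I’m unsure about.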

Realtime expression substitution between two users would also be cool. I am not looking for working code, just general guidance on how to achieve this.

Thanks!

Here’s some progress on this topic: https://vimeo.com/110977646
In this demo, the mouth and eyebrow points from the source face are mapped onto the neutral destination face. In the case of a smile, I also take the inner mouth mesh from the source face and map it onto the destination.
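The re-posing step is roughly the following (a simplified sketch, not the demo code: it assumes two ofxFaceTracker instances on two hypothetical grabbers and the addon’s getObjectFeature/getPosition/getScale/getRotationMatrix accessors, and it leaves out the actual warping and blending of the destination image):

```cpp
#include "ofMain.h"
#include "ofxCv.h"
#include "ofxFaceTracker.h"

class ofApp : public ofBaseApp {
public:
    ofVideoGrabber srcCam, dstCam;         // hypothetical: one grabber per user
    ofxFaceTracker srcTracker, dstTracker;

    void setup() {
        srcCam.setDeviceID(0); srcCam.setup(640, 480);
        dstCam.setDeviceID(1); dstCam.setup(640, 480);
        srcTracker.setup();
        dstTracker.setup();
    }

    void update() {
        srcCam.update();
        dstCam.update();
        if(srcCam.isFrameNew()) srcTracker.update(ofxCv::toCv(srcCam));
        if(dstCam.isFrameNew()) dstTracker.update(ofxCv::toCv(dstCam));
    }

    void draw() {
        dstCam.draw(0, 0);
        if(srcTracker.getFound() && dstTracker.getFound()) {
            ofPushMatrix();
            // move into the destination face's image-space pose
            ofVec2f pos = dstTracker.getPosition();
            ofTranslate(pos.x, pos.y);
            float s = dstTracker.getScale();
            ofScale(s, s, s);
            ofMultMatrix(dstTracker.getRotationMatrix());

            // source expression curves in object space (the source's own pose is
            // already factored out), drawn in the destination's frame
            ofxFaceTracker::Feature features[] = {
                ofxFaceTracker::OUTER_MOUTH, ofxFaceTracker::INNER_MOUTH,
                ofxFaceTracker::LEFT_EYEBROW, ofxFaceTracker::RIGHT_EYEBROW
            };
            for(ofxFaceTracker::Feature f : features) {
                srcTracker.getObjectFeature(f).draw();
            }
            ofPopMatrix();
        }
    }
};

int main() {
    ofSetupOpenGL(640, 480, OF_WINDOW);
    ofRunApp(new ofApp());
}
```

Working in object space is what keeps this simple: each face’s own translation, scale and rotation are removed, so the source mouth and eyebrow curves can be dropped straight into the destination face’s pose.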