Currently, I’m using the ofxFaceTracker expression classification example to classify different expressions, and it works very well. However, I have some questions:
As far as I can tell from the code, this method uses the 3D distance (or norm) between the current object points and the pre-defined model points to classify expressions. However, two different expressions may lie at the same distance from the pre-defined model points even though they belong to different categories. How can we solve this problem?
In this example, we can save the current expression's object points as a pre-defined model and then classify by computing the distance to it. However, if we take several samples that all belong to the same category but at different distances (or angles/rotations), the classification output may differ. How should we solve this problem?
By the way, can someone give me a little hint about how this face tracker library calculates the 3D object locations from the 2D image locations? Thank you very much in advance.