Abstract
Robust and precise face tracking under unconstrained imaging conditions remains a challenging task. Recently, the Constrained Local Model (CLM) framework has proven very powerful for tracking frontal and near-frontal facial movements. In this paper, we introduce a Pose-Adaptive CLM that accurately tracks large 3D head rotations. The model relies on two main components: (1) an adaptive 3D Point Distribution Model that ensures consistency between a tracked point in the image and the corresponding point in the shape model, and (2) an adaptive appearance model that handles the appearance variations of a point under different viewing angles. We present comparative experimental results highlighting the improvements in both robustness and accuracy achieved by our method. We also introduce a new challenging dataset with accurate head pose annotations.