Facial expressions exhibit not only facial feature motions but also subtle changes in illumination and appearance. Since it is difficult to generate realistic facial expressions using geometric deformations alone, detailed features such as textures should also be deformed to achieve more realistic expressions. Existing methods such as the expression ratio image have the drawback that detailed changes of skin color caused by lighting cannot be generated properly. In this paper, we propose a nonlinear model of skin color change and a model-based facial expression synthesis method that can apply realistic expression details under different lighting conditions. The proposed method consists of three steps: automatic extraction of facial features using an active appearance model and geometric deformation of the expression using warping; generation of the facial expression using the nonlinear skin color change model; and synthesis of the original face with the generated expression using a blending ratio computed by the Euclidean distance transform. Experimental results show that the proposed method generates realistic facial expressions under various lighting conditions.
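The abstract does not detail how the distance-transform-based blending ratio is formed, so the following is only a minimal sketch of one plausible realization: the blending weight is set to 1 inside a binary expression-region mask and decays with the Euclidean distance to that region outside it. The function name, the image/mask arguments, and the `falloff` parameter are illustrative assumptions, not part of the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def blend_expression(original, expression, mask, falloff=20.0):
    """Blend a synthesized expression region into the original face.

    original, expression: float images of shape (H, W, 3) in [0, 1]
    mask: binary array (H, W), 1 inside the synthesized expression region
    falloff: width in pixels over which the blend fades to the original
    (all names/parameters here are assumptions for illustration)
    """
    # Euclidean distance from each pixel outside the mask to the region;
    # pixels inside the mask get distance 0.
    dist_outside = distance_transform_edt(mask == 0)

    # Blending ratio: 1 inside the region, decaying linearly to 0 with
    # distance, so the transition to the original face is seamless.
    alpha = np.clip(1.0 - dist_outside / falloff, 0.0, 1.0)[..., None]

    return alpha * expression + (1.0 - alpha) * original
```

A smooth, distance-based ratio of this kind avoids visible seams at the boundary between the generated expression and the untouched parts of the input face.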