Blendshapes, linear three-dimensional (3D) models of facial expressions, have become a standard way of generating 3D facial shapes in the computer graphics, animation, and game industries. We introduce a novel method, called selective expression representation (SEA), to acquire the coefficients of the expression blendshapes that represent a target facial shape. Previously, the facial shape has been obtained merely by minimizing the distance to sparse or sampled target points. This causes the facial shape to be composed of blendshapes that are redundant to each other. Because these redundancies weaken the semantic meanings of the blendshape expressions, the resulting facial shape may fail to represent the facial expression from a human perspective. SEA therefore focuses on preserving the semantics of the blendshapes while representing the facial shape accurately. Under the assumption that each delta blendshape is a facial movement carrying a semantic meaning, SEA finds a series of facial motions needed to compose the target facial shape. By introducing a metric that quantifies the directional similarity of facial motions between the target and each blendshape, SEA sequentially selects a sufficient number of expressions most analogous to the facial motion of the target. We demonstrate that less-correlated expressions that increase the similarity to the target can be obtained non-parametrically by the proposed selection method. Our experiments show that, for sampled facial points, fitting the facial shape with the less-correlated expressions better predicts unobserved facial points. Since each expression is represented in a manner that produces less interference with the others, the set of selected models represents the target while preserving the semantic meaning of each expression, improving facial representation over both the baseline methods and the state-of-the-art methods.
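The sequential selection described above can be illustrated with a minimal sketch. This is not the paper's actual SEA algorithm; it assumes a greedy, matching-pursuit-style procedure in which the delta blendshape whose direction is most similar (by cosine similarity) to the remaining target motion is selected at each step, and its coefficient is fit by least squares on the residual. The function and variable names are illustrative only.

```python
import numpy as np

def select_expressions(target_delta, blendshape_deltas, k=3):
    """Greedily pick up to k delta blendshapes whose directions best match
    the remaining target motion, fitting each coefficient on the residual.

    target_delta:       (n,) displacement of the target from the neutral face
    blendshape_deltas:  list of (n,) delta blendshape vectors
    """
    residual = np.asarray(target_delta, dtype=float).copy()
    selected, coeffs = [], []
    for _ in range(k):
        # Directional similarity between the residual motion and each
        # unused delta blendshape (cosine similarity).
        sims = []
        for i, b in enumerate(blendshape_deltas):
            if i in selected:
                sims.append(-np.inf)
                continue
            denom = np.linalg.norm(residual) * np.linalg.norm(b) + 1e-12
            sims.append(float(np.dot(residual, b)) / denom)
        best = int(np.argmax(sims))
        if sims[best] <= 0.0:
            break  # no remaining blendshape moves in the target's direction
        b = np.asarray(blendshape_deltas[best], dtype=float)
        w = float(np.dot(residual, b) / np.dot(b, b))  # least-squares coefficient
        selected.append(best)
        coeffs.append(w)
        residual = residual - w * b
    return selected, coeffs
```

With two orthogonal delta blendshapes and a target that mixes them, the sketch recovers both the ordering (most-similar expression first) and the mixing coefficients, which is the behavior the abstract attributes to a similarity-driven sequential selection.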