Facial Animation Method Based on Deep Learning and Expression AU Parameters
Abstract
To generate virtual characters with realistic expressions more conveniently on a computer, a facial animation method based on deep learning and expression AU parameters is proposed. The method defines 24 facial action unit parameters, i.e., the expression AU parameters, to describe facial expressions; a corresponding parameter regression network model is then constructed and trained using a convolutional neural network and the FEAFA dataset. When generating facial animation from video images, video sequences are first captured with an ordinary monocular camera, and faces are detected in the video frames using the supervised descent method. The expression AU parameters, regarded as expression blendshape coefficients, are then regressed accurately from the detected face images and combined with the avatar's neutral-expression blendshape and the 24 corresponding blendshapes to animate the digital avatar under a blendshape model in real-world conditions. This method does not require the 3D reconstruction process used in traditional methods, and because the relationships between different action units are taken into consideration, the generated animation is more natural and realistic. Furthermore, the expression coefficients regressed directly from face images are more accurate than those estimated from facial landmarks.
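To make the blendshape combination step concrete, the following minimal sketch (with hypothetical array names and shapes; the paper does not provide an implementation) shows how the 24 regressed AU coefficients could drive a standard linear delta-blendshape model:

```python
import numpy as np

def blend_expression(neutral, blendshapes, au_coeffs):
    """Combine a neutral face mesh with 24 expression blendshapes.

    neutral:      (V, 3) array of vertex positions for the neutral mesh.
    blendshapes:  (24, V, 3) array, one target mesh per expression AU.
    au_coeffs:    (24,) regressed AU parameters, used as blend weights.
    """
    # Delta-blendshape model: start from the neutral mesh and add each
    # AU's vertex displacement scaled by its regressed coefficient.
    deltas = blendshapes - neutral                       # per-AU offsets
    return neutral + np.tensordot(au_coeffs, deltas, axes=1)
```

Under this formulation the animated mesh is B = B_0 + sum_k w_k (B_k - B_0), so a coefficient of 0 leaves the neutral face unchanged and a coefficient of 1 fully activates the corresponding action unit.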