Citation: Yan Yanfu, Lyu Ke, Xue Jian, Wang Cong, Gan Wei. Facial Animation Method Based on Deep Learning and Expression AU Parameters[J]. Journal of Computer-Aided Design & Computer Graphics, 2019, 31(11): 1973-1980. DOI: 10.3724/SP.J.1089.2019.17682

Facial Animation Method Based on Deep Learning and Expression AU Parameters

Abstract: To generate virtual characters with realistic expressions more conveniently using computers, a facial animation method based on deep learning and expression AU parameters is proposed. The method defines 24 facial action unit parameters, i.e., expression AU parameters, to describe facial expressions, and constructs and trains a corresponding parameter regression network model using a convolutional neural network and the FEAFA dataset. When generating facial animation from video, frames are first captured by an ordinary monocular camera, and faces are detected in each frame using the supervised descent method. The expression AU parameters, regarded as 3D expression blendshape coefficients, are then accurately regressed from the detected face images and combined with the avatar's neutral expression shape and the 24 corresponding basic 3D expression blendshapes to drive the avatar and generate facial animation under real-world conditions based on a blendshape model. The method eliminates the 3D reconstruction step required by traditional approaches, and because it takes the interaction between different action units into account, the generated animation is more natural and realistic. Furthermore, regressing the expression coefficients from face images is more accurate than regressing them from facial landmarks.
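The regression step lends itself to a short illustration. Below is a minimal PyTorch sketch of a CNN that maps a cropped face image to the 24 AU parameter values; the ResNet-18 backbone, the 224x224 input size, and the sigmoid output range are illustrative assumptions, since the abstract specifies only that a convolutional network is trained on FEAFA.

```python
# Minimal sketch (not the authors' exact architecture) of a CNN that
# regresses 24 expression AU intensities from a cropped face image.
import torch
import torch.nn as nn
from torchvision import models

class AURegressor(nn.Module):
    """CNN regressing 24 expression AU parameters from a face crop."""
    def __init__(self, num_aus: int = 24):
        super().__init__()
        # Hypothetical backbone choice; the paper only specifies "a CNN".
        self.backbone = models.resnet18(weights=None)
        # Swap the 1000-way classifier for a 24-dim regression head.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_aus)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Sigmoid keeps each AU intensity in [0, 1], matching FEAFA's
        # continuous AU annotations.
        return torch.sigmoid(self.backbone(x))

model = AURegressor()
face = torch.randn(1, 3, 224, 224)  # stand-in for a detected face crop
au_params = model(face)             # shape (1, 24): one value per AU
```

Training such a model would amount to minimizing a regression loss (e.g., mean squared error) between the predicted 24-vector and FEAFA's annotated AU intensities for each face image.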

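The blendshape step the abstract describes computes the deformed face as the neutral shape plus a weighted sum of per-expression offsets, V = B0 + sum_i w_i (B_i - B0), with the regressed AU parameters as weights w_i. A minimal NumPy sketch follows; the array names and shapes are illustrative assumptions.

```python
# Minimal sketch of the delta-blendshape deformation: the 24 regressed
# AU parameters act as blending weights over the avatar's 24 expression
# blendshapes relative to its neutral shape.
import numpy as np

def blend_expression(neutral: np.ndarray,      # (V, 3) neutral-pose vertices
                     blendshapes: np.ndarray,  # (24, V, 3) expression shapes
                     weights: np.ndarray       # (24,) AU parameters in [0, 1]
                     ) -> np.ndarray:
    """Return deformed vertices: neutral + sum_i w_i * (B_i - neutral)."""
    deltas = blendshapes - neutral              # per-shape vertex offsets
    return neutral + np.tensordot(weights, deltas, axes=1)

# Example with a toy 3-vertex mesh:
neutral = np.zeros((3, 3))
blendshapes = np.random.rand(24, 3, 3)
weights = np.random.rand(24)                    # e.g. the CNN's output
deformed = blend_expression(neutral, blendshapes, weights)
```

Because the weights come directly from per-frame AU regression, applying this deformation frame by frame animates the avatar without any intermediate 3D reconstruction, which is the shortcut the abstract highlights.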