Citation: Zhang Ruisi, Pan Ye. Stylized Avatar Animation Based on Deep Learning[J]. Journal of Computer-Aided Design & Computer Graphics, 2022, 34(5): 675-682. DOI: 10.3724/SP.J.1089.2022.19006

Stylized Avatar Animation Based on Deep Learning

Abstract: Animating 3D character rigs from human faces requires both geometric features and facial-expression information. Traditional motion-capture approaches such as ARKit capture facial expressions from face geometry alone, which makes it difficult to convey the character's expression changes to the audience. More recent emotion-based motion-capture techniques, such as ExprGen, use facial emotion for capture, but struggle to reproduce the details of the character's face. A method is therefore proposed that combines facial geometry with expression information to drive animated characters. First, a neural network is trained to recognize human and character facial expressions and to match images between the human and character datasets. Then, an end-to-end neural network is trained to extract character expression information and produce the character's rig parameters. Finally, facial geometry is used to refine the rig parameters of key facial landmarks. Qualitative analysis of the character expressions generated from different face inputs, together with quantitative analysis of the attractiveness and intensity of character motion driven by videos of four actors, demonstrates the accuracy and real-time performance of the method.
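The three-stage pipeline summarized in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the class and function names (EmotionClassifier, RigRegressor, refine_with_geometry), the layer layouts, and the assumed numbers of emotion classes, rig parameters, and landmarks are placeholders chosen for illustration, written against PyTorch.

```python
# Minimal sketch of the pipeline described in the abstract (illustrative only,
# not the paper's code): (1) expression recognition for matching human and
# character images, (2) an end-to-end network from a face image to character
# rig parameters, (3) geometry-based refinement of landmark-related controls.
import torch
import torch.nn as nn

NUM_EMOTIONS = 7      # assumed number of expression classes used for matching
NUM_RIG_PARAMS = 50   # assumed dimensionality of the character rig parameters
NUM_LANDMARKS = 68    # assumed number of 2D facial landmarks


class EmotionClassifier(nn.Module):
    """Stage 1: classifies the expression of a face (human or character) so that
    images with the same expression can be matched across the two datasets."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, NUM_EMOTIONS)

    def forward(self, img):
        return self.head(self.features(img))


class RigRegressor(nn.Module):
    """Stage 2: end-to-end network mapping a face image to rig parameters."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.regressor = nn.Linear(64, NUM_RIG_PARAMS)

    def forward(self, img):
        return self.regressor(self.encoder(img))


def refine_with_geometry(rig_params, landmarks, landmark_to_rig, alpha=0.5):
    """Stage 3 (illustrative): blend the predicted rig parameters with an
    estimate derived from detected facial landmarks."""
    geometry_estimate = landmark_to_rig(landmarks.flatten(1))
    return (1 - alpha) * rig_params + alpha * geometry_estimate


if __name__ == "__main__":
    face = torch.rand(1, 3, 128, 128)            # dummy face image
    landmarks = torch.rand(1, NUM_LANDMARKS, 2)  # dummy 2D landmark positions

    emotion_logits = EmotionClassifier()(face)   # used offline for dataset matching
    rig = RigRegressor()(face)                   # predicted rig parameters

    # hypothetical linear mapping from landmarks to landmark-related rig controls
    landmark_to_rig = nn.Linear(NUM_LANDMARKS * 2, NUM_RIG_PARAMS)
    refined = refine_with_geometry(rig, landmarks, landmark_to_rig)
    print(emotion_logits.shape, refined.shape)
```

The blend weight alpha in the refinement stage stands in for whatever correction scheme the paper applies to landmark-related rig controls; the abstract only states that face geometry is used to refine those parameters.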

     
