Fan Yiwen, Xia Shihong. Towards Expressively Speech-Driven Facial Animation[J]. Journal of Computer-Aided Design & Computer Graphics, 2013, 25(6): 890-899.


Towards Expressively Speech-Driven Facial Animation

  • Abstract (translated from Chinese): To generate expression details such as eye blinks and eyebrow raises that naturally accompany speech-driven facial motion, and thereby strengthen the sense of immersion in virtual environments, this paper proposes a speech-driven facial animation method that can synthesize expression details. The method consists of two phases, training and synthesis. In the training phase, captured 3D expressive facial speech-motion data is first re-sampled to reduce the amount of training data and improve training efficiency; a hidden Markov model (HMM) is then trained to learn the relationship between expressive facial motion and the synchronized speech, and the synthesis residuals of the trained HMM on the training set are collected. In the synthesis phase, the trained HMM first infers a matching expressive facial animation from new speech features; expression details are then added on top of this estimate using the residuals computed during training. Experimental results show that the method is more computationally efficient than existing approaches, and the synthesized expression details were validated by a user study.

     

    Abstract: In order to synthesize facial expression details such as eye blinking and eyebrow lifting, this paper presents a new speech-driven facial animation method. The method consists of two phases: training and synthesis. During the training phase, 3D facial animation data is first re-sampled to improve training efficiency. A hidden Markov model (HMM) is then used to learn the correlation between expressive facial animation features and the synchronized speech. At the same time, statistics of the synthesis residuals of the trained HMM are collected on the training data. During the synthesis phase, the trained HMM first estimates the matching expressive facial animation from the novel input speech features; then, on top of this estimate, expression details are synthesized using the collected residuals. Numerical experiments and a user study show that this method outperforms conventional approaches in both efficiency and animation quality.
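The train-then-add-residuals idea in the abstract can be illustrated with a minimal sketch. The stand-in model below is an ordinary least-squares map rather than the paper's HMM, and all function names, dimensions, and data here are hypothetical, chosen only to show how residuals collected on the training set can restore detail that a smooth model misses:

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Training phase ---
def train_linear_map(speech_feats, face_feats):
    # Least-squares map from speech features to facial parameters
    # (illustrative stand-in for the paper's trained HMM).
    W, *_ = np.linalg.lstsq(speech_feats, face_feats, rcond=None)
    return W

def collect_residuals(W, speech_feats, face_feats):
    # Synthesis residuals on the training set: the detail (e.g. blinks)
    # that the smooth model fails to reproduce.
    return face_feats - speech_feats @ W

# --- Synthesis phase ---
def synthesize(W, new_speech, residual_bank, rng):
    base = new_speech @ W                        # smooth model estimate
    idx = rng.integers(len(residual_bank), size=len(base))
    return base + residual_bank[idx]             # add back expression detail

# Toy data: 200 frames of 13-D speech features and 30-D face parameters.
speech = rng.normal(size=(200, 13))
face = speech @ rng.normal(size=(13, 30)) + 0.1 * rng.normal(size=(200, 30))

W = train_linear_map(speech, face)
res = collect_residuals(W, speech, face)
anim = synthesize(W, rng.normal(size=(50, 13)), res, rng)
print(anim.shape)  # (50, 30)
```

In the paper the residual statistics come from a trained HMM and the details are added in a principled way; here the residuals are simply sampled at random per frame, which is enough to show the data flow of the two phases.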

     

