Yu Zhiping, Chi Jing, Ye Yanan, Dai Fuyun. Detailed Features-Preserving 3D Facial Expression Transfer[J]. Journal of Computer-Aided Design & Computer Graphics, 2021, 33(2): 186-198. DOI: 10.3724/SP.J.1089.2021.18298

Detailed Features-Preserving 3D Facial Expression Transfer

Abstract: In the field of 3D facial expression transfer, two hot problems are preserving the rich detailed information of the target model so that the generated expressions look realistic and natural, and reducing the training time. To address them, this paper presents a new detailed-features-preserving 3D facial expression transfer method. First, the detailed features are extracted from the 3D face models to obtain basic expression models with the details filtered out. Then, the basic expression of the source model is transferred to the target model with an improved parametric dimensionality reduction by unsupervised regression. Finally, the detailed features of the target model are restored using the proposed detailed feature vector adjustment strategy. Visual comparison and quantitative analysis experiments, with reconstruction accuracy and training time as the evaluation indexes, are conducted on 3D facial datasets such as COMA in Matlab under Windows 10. The results show that, compared with the nonlinear co-learning method, the proposed method not only transfers the expression of the source model to the target model without loss, but also well preserves the personalized detailed features of the target model, so the generated expressions are realistic and natural. The method also effectively improves the training speed of facial expression transfer.
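The abstract outlines a three-stage pipeline: extract detail features to obtain a detail-free basic expression, transfer the basic expression with an improved parametric unsupervised regression, and restore the target's details with a detail-feature-vector adjustment strategy. The sketch below only illustrates that data flow under the assumption that details are per-vertex residual displacements; the smoothing, regression, and adjustment functions are hypothetical stand-ins, not the authors' implementations.

```python
# Illustrative sketch of the three-stage pipeline described in the abstract
# (detail extraction -> basic-expression transfer -> detail restoration).
# The paper's actual algorithms are not given here, so the smoothing,
# regression, and adjustment steps are hypothetical placeholders.
import numpy as np


def extract_details(vertices, smooth):
    """Split a face mesh (N x 3 vertex array) into a smoothed 'basic expression'
    and per-vertex detail displacements, so that basic + details == vertices."""
    basic = smooth(vertices)        # assumption: details live in the residual
    details = vertices - basic
    return basic, details


def transfer_basic_expression(source_basic, target_basic, regress):
    """Map the source model's basic expression onto the target identity.
    `regress` stands in for the paper's improved parametric unsupervised
    regression; any callable with this signature can be plugged in."""
    return regress(source_basic, target_basic)


def restore_details(transferred_basic, target_details, adjust):
    """Re-apply the target's own detail vectors after an adjustment step that
    stands in for the paper's detail-feature-vector adjustment strategy."""
    return transferred_basic + adjust(target_details, transferred_basic)


if __name__ == "__main__":
    # Toy data and crude placeholders, only to show the data flow.
    rng = np.random.default_rng(0)
    source = rng.random((100, 3))
    target = rng.random((100, 3))

    smooth = lambda v: v.mean(axis=0) + 0.5 * (v - v.mean(axis=0))   # crude low-pass
    regress = lambda s, t: t + (s - s.mean(axis=0))                  # naive offset transfer
    adjust = lambda d, _basic: d                                     # no-op adjustment

    src_basic, _ = extract_details(source, smooth)
    tgt_basic, tgt_details = extract_details(target, smooth)
    transferred = transfer_basic_expression(src_basic, tgt_basic, regress)
    result = restore_details(transferred, tgt_details, adjust)
    print(result.shape)   # (100, 3): target mesh with its own details restored
```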

     
