Citation: Zhong Yicheng, Pei Yuru, Li Peixin. Single-View 3D Face Reconstruction via Cross-View Consistency Constraints[J]. Journal of Computer-Aided Design & Computer Graphics, 2024, 36(4): 543-551. DOI: 10.3724/SP.J.1089.2024.19772

Single-View 3D Face Reconstruction via Cross-View Consistency Constraints

Abstract: Deep neural network-based unsupervised single-view 3D face reconstruction has achieved remarkable success. Existing work relies on the photometric rendering constraint and symmetric regularization to learn from 2D single-view facial images; however, single-view facial images lack reliable facial geometry and texture constraints due to self-occlusion and illumination variations. In this paper, we propose a two-stage single-view 3D face reconstruction framework based on cross-view consistency constraints. First, a part network (PartNet) with parallel branches estimates view-dependent pixel-wise UV positional and albedo maps, and the geometries and textures missing due to self-occlusion are filled in by the low-dimensional statistical face model (3DMM). Second, a completion network (CompNet) completes and refines the per-view UV positional and albedo maps, recovering complete 3D facial geometry and texture with details. We design cross-view consistency constraints in terms of photometric rendering, facial texture, and UV positional maps, and optimize the end-to-end model on multi-view facial image datasets in an unsupervised manner. Experiments show that the proposed method accurately aligns faces from a single-view image, infers plausible facial geometry and texture in self-occluded regions, and reconstructs high-fidelity 3D faces with geometry and texture details. In particular, the proposed method reduces the root mean square error by 6.36% compared with state-of-the-art methods on the MICC Florence dataset.
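
The cross-view consistency idea described in the abstract can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a minimal PyTorch-style illustration under stated assumptions: each view of the same subject is assumed to yield a UV positional map, a UV albedo map, a rendered image, and a face-region mask, and the names `uv_pos`, `uv_albedo`, `rendered`, `masks`, `w_photo`, `w_geo`, and `w_tex` are hypothetical. It only shows how a photometric rendering term and pairwise cross-view agreement terms over UV-space maps could be combined into one unsupervised training loss.

```python
# Minimal sketch (not the authors' code) of cross-view consistency losses.
# Assumption: per-view tensors of shape (B, C, H, W) for the same subject.
import torch


def photometric_loss(rendered, image, mask):
    """L1 photometric rendering loss restricted to the face region."""
    return (mask * (rendered - image).abs()).sum() / mask.sum().clamp(min=1.0)


def cross_view_consistency(maps):
    """Pairwise L1 agreement between per-view UV-space maps.

    UV space is view-independent, so maps predicted from different views
    of the same subject should coincide; penalize their differences.
    """
    loss, pairs = 0.0, 0
    for i in range(len(maps)):
        for j in range(i + 1, len(maps)):
            loss = loss + (maps[i] - maps[j]).abs().mean()
            pairs += 1
    return loss / max(pairs, 1)


def total_loss(rendered, images, masks, uv_pos, uv_albedo,
               w_photo=1.0, w_geo=0.5, w_tex=0.5):
    """Combine the photometric term with cross-view geometry/texture terms.

    The weights are hypothetical placeholders, not values from the paper.
    """
    photo = sum(photometric_loss(r, im, m)
                for r, im, m in zip(rendered, images, masks)) / len(images)
    geo = cross_view_consistency(uv_pos)      # UV positional maps
    tex = cross_view_consistency(uv_albedo)   # UV albedo (texture) maps
    return w_photo * photo + w_geo * geo + w_tex * tex
```

In such a setup, the photometric term ties each view's rendering to its own input image, while the two consistency terms couple the views in UV space, which is what lets self-occluded regions in one view be constrained by another view where they are visible.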

     
