Ye Keyang, Pan Quli, Ren Zhong. Neural Point Cloud Rendering via Depth Peeling Multi-Projection and Temporary Refine[J]. Journal of Computer-Aided Design & Computer Graphics, 2023, 35(5): 676-684. DOI: 10.3724/SP.J.1089.2023.19419


Neural Point Cloud Rendering via Depth Peeling Multi-Projection and Temporary Refine


Abstract: To address the problem that existing point-cloud-based neural rendering networks cannot render high-quality hair with temporal stability, a neural rendering method based on depth peeling of the hair point cloud, together with a temporal-stability refinement network, is presented. The depth peeling stage projects the input point cloud in separate depth layers to obtain per-layer feature information, and fuses these layers to account for the translucency of hair; the fused result is then fed into the temporal refinement network. This module uses the reprojection of the point cloud between adjacent frames to model the dependency between the current frame and the preceding frames, and generates the final result for the current frame, thereby ensuring temporal stability. Experiments on high-quality hair datasets generated by ray tracing show that, compared with existing methods, the proposed method achieves better temporal stability and rendering quality.
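The layered projection in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `depth_peel` is hypothetical, the point cloud is assumed to be already projected to pixel coordinates, and each layer records only the k-th nearest depth per pixel, whereas the paper rasterizes per-layer feature maps that a network then fuses.

```python
import numpy as np

def depth_peel(points, width, height, num_layers):
    """Split a projected point cloud into per-pixel depth layers.

    points: (N, 3) array with columns (u, v, z), where u/v are pixel
    coordinates and z is camera-space depth (hypothetical layout).
    Returns num_layers depth maps of shape (height, width); pixel (y, x)
    of layer k holds the k-th nearest depth, or inf if no point falls there.
    """
    layers = [np.full((height, width), np.inf) for _ in range(num_layers)]
    # Number of layers already filled at each pixel.
    fill = np.zeros((height, width), dtype=int)

    # Visit points front to back so layer k receives the k-th nearest point.
    order = np.argsort(points[:, 2])
    for u, v, z in points[order]:
        x, y = int(u), int(v)
        if 0 <= x < width and 0 <= y < height and fill[y, x] < num_layers:
            layers[fill[y, x]][y, x] = z
            fill[y, x] += 1
    return layers
```

Points beyond the first `num_layers` at a pixel are discarded, which mirrors how depth peeling bounds the number of translucent layers that contribute to the fused result.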

     
