Novel View Synthesis Algorithm for Unbounded Scenes Based on Near-Far Separation of Gaussian Point Clouds
Abstract: Unbounded scenes currently lack efficient scene representations, so the distant parts of a scene are reconstructed at low quality and novel view synthesis easily produces artifacts and floaters. To represent unbounded scenes efficiently and improve the realism of novel view synthesis, we propose DoubleGS, a novel view synthesis algorithm for unbounded scenes based on near-far separation of Gaussian point clouds. First, a near-far separated scene representation built on Gaussian point clouds is proposed, which improves reconstruction quality. Second, pre-training initialization of the far point cloud and pruning of the near point cloud are used to improve the overall reconstruction quality of the scene and to reduce artifacts and floaters in synthesized views. Finally, on top of the point splatting algorithm, a two-stage differentiable Gaussian point cloud rendering pipeline is designed and implemented, which improves the rendering performance of the Gaussian point clouds. Experiments on several multi-view image datasets containing unbounded scenes show that, while maintaining real-time rendering, DoubleGS outperforms the compared methods on PSNR, SSIM, LPIPS, and other metrics, and reduces the average rendering time in unbounded scenes by 23% compared with 3DGS.
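To make the near-far separation idea concrete, the following minimal sketch (an illustration only, not the DoubleGS implementation) partitions an SfM point cloud into near and far subsets by distance from the camera rig. The function name split_near_far, the camera-centroid anchor, and the radius_scale cutoff are assumptions introduced for this example; the actual separation criterion used by DoubleGS is described in the method section.

# Illustrative sketch: split an SfM point cloud into "near" and "far" subsets
# by distance from the camera rig. The threshold heuristic (a multiple of the
# camera-extent radius) is an assumption, not the criterion used by DoubleGS.
import numpy as np

def split_near_far(points: np.ndarray,
                   cam_centers: np.ndarray,
                   radius_scale: float = 3.0):
    """points: (N, 3) SfM points; cam_centers: (M, 3) camera positions."""
    center = cam_centers.mean(axis=0)                       # scene anchor
    cam_radius = np.linalg.norm(cam_centers - center, axis=1).max()
    dist = np.linalg.norm(points - center, axis=1)          # point-to-anchor distance
    near_mask = dist <= radius_scale * cam_radius            # hypothetical cutoff
    return points[near_mask], points[~near_mask]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.normal(scale=10.0, size=(1000, 3))             # stand-in for COLMAP points
    cams = rng.normal(scale=1.0, size=(50, 3))                # stand-in for camera centers
    near, far = split_near_far(pts, cams)
    print(f"near: {near.shape[0]} points, far: {far.shape[0]} points")

In an actual pipeline, the two subsets would seed separate Gaussian point clouds, with the far point cloud pre-trained before joint optimization, as the abstract describes.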