Neural Implicit Surface Parameterization and Texture Reconstruction

Abstract: Neural implicit surface texture mapping plays a crucial role in high-fidelity multi-view reconstruction, yet existing methods still exhibit limitations in mapping robustness and detail representation. To address these issues, we propose a joint learning framework for surface parameterization and texture mapping based on neural signed distance function (SDF) implicit surface reconstruction. The proposed method first samples rays within the masked regions and computes their intersection points with the implicit surface. These 3D intersection coordinates are fed into a parameterization network F to estimate parameterized coordinates, while an inverse mapping network G is employed to enhance mapping robustness. Subsequently, the parameterization process is constrained by a conformal loss and an equiareal loss to ensure the regularity of the parameterization and the smoothness of the generated textures. Finally, the obtained parameterized coordinates are used to extract texture features from a neural texture map, improving the capture of high-frequency texture details without increasing the model's computational complexity. Novel view synthesis experiments on the DTU dataset demonstrate that our method outperforms the mainstream NGF approach, achieving average improvements of approximately 12.21% in peak signal-to-noise ratio (PSNR) and 6.51% in structural similarity (SSIM), thereby validating the effectiveness and superiority of the proposed approach.
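
The abstract does not include an implementation, so the following is a minimal, illustrative sketch of the described pipeline under stated assumptions: an MLP F maps 3D ray-surface intersection points to 2D parameterized (UV) coordinates, an inverse MLP G is trained with a cycle-consistency term to make the mapping robust, conformal and equiareal penalties are imposed on the Jacobian of F, and texture features are looked up in a learnable neural texture map. The network sizes, the exact loss formulations, the sigmoid squashing of UVs, and the loss weights are all assumptions, not the authors' implementation.

```python
# Illustrative sketch only (PyTorch). Every module name, network size, loss
# formulation, and weight below is an assumption made to mirror the pipeline
# described in the abstract; it is not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F_nn


class MLP(nn.Module):
    """Small MLP used both for F (3D point -> 2D UV) and G (2D UV -> 3D point)."""
    def __init__(self, d_in, d_out, width=128, depth=4):
        super().__init__()
        layers, d = [], d_in
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Softplus(beta=100)]
            d = width
        layers.append(nn.Linear(d, d_out))
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)


def parameterization_losses(F_net, G_net, x_surf):
    """Losses on surface points x_surf (N, 3) obtained by intersecting masked rays
    with the implicit SDF surface (the intersection step is not shown here).

    Returns a cycle-consistency term (robustness via the inverse map G) and
    conformal / equiareal terms computed from the Jacobian of F.
    """
    x_surf = x_surf.detach().requires_grad_(True)
    uv = F_net(x_surf)                                    # (N, 2) parameterized coords

    # Inverse mapping G should carry uv back to the original 3D surface point.
    cycle = (G_net(uv) - x_surf).pow(2).sum(-1).mean()

    # Jacobian J = d(uv)/dx, shape (N, 2, 3), one backward pass per UV channel.
    rows = [torch.autograd.grad(uv[:, k].sum(), x_surf, create_graph=True)[0]
            for k in range(2)]
    J = torch.stack(rows, dim=1)

    # Metric of the map, E = J J^T (N, 2, 2); its eigenvalues are the squared
    # singular values of J.
    E = torch.bmm(J, J.transpose(1, 2))
    a, b, c = E[:, 0, 0], E[:, 0, 1], E[:, 1, 1]

    # Conformal: the two singular values should match (no angle distortion).
    conformal = ((a - c).pow(2) + 4.0 * b.pow(2)).mean()
    # Equiareal: det(E) should stay near a constant area scale (1.0 is arbitrary).
    equiareal = (a * c - b * b - 1.0).pow(2).mean()
    return cycle, conformal, equiareal


def sample_neural_texture(texture, uv):
    """Bilinearly sample a learnable texture map (1, C, H, W) at uv in [0, 1]^2."""
    grid = (uv * 2.0 - 1.0).view(1, -1, 1, 2)             # grid_sample expects [-1, 1]
    feat = F_nn.grid_sample(texture, grid, align_corners=True)   # (1, C, N, 1)
    return feat.view(texture.shape[1], -1).t()             # (N, C) texture features


if __name__ == "__main__":
    F_net, G_net = MLP(3, 2), MLP(2, 3)
    texture = nn.Parameter(torch.randn(1, 16, 256, 256))   # learnable neural texture map
    x = torch.rand(1024, 3)                                 # stand-in surface points
    cyc, conf_loss, area_loss = parameterization_losses(F_net, G_net, x)
    uv01 = torch.sigmoid(F_net(x))                          # squash UVs into [0, 1] for lookup
    feats = sample_neural_texture(texture, uv01)            # fed to a color decoder in practice
    loss = cyc + 0.1 * conf_loss + 0.1 * area_loss          # loss weights are placeholders
```

In this sketch the conformal term penalizes a mismatch between the two singular values of the Jacobian (angle distortion), and the equiareal term keeps its determinant near a constant, which is one common way to realize the two constraints named in the abstract; the published method may formulate them differently.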

     
