Texture Optimization Algorithm for 3D Scenes Fusing Semantic-Grayscale Features
Abstract: To address the blurring artifacts that arise in 3D scene texture mapping, this paper proposes a texture optimization algorithm that fuses semantic and grayscale features, recovering photorealistic texture maps for 3D scenes from multi-view images. Compared with existing algorithms, the proposed method has clear advantages for texture mapping of 3D scenes with large camera pose errors and low-precision reconstructed geometry. First, the initial image mapping relationships are computed from the camera pose associated with each image. Then, the initial mapping is optimized with semantic features to ensure correct colors across the geometric models in the scene, and further optimized with grayscale features to ensure correct texture colors within each model. Finally, texture images are synthesized by fusing pixels with a weighted averaging strategy that incorporates 3D scene information, and the texture images are back-projected onto the geometry according to the mapping relationships, yielding a 3D scene with high-fidelity textures. Experiments on 3D meshes lacking color information, compared against other texture mapping algorithms, show that the proposed algorithm generates 3D scenes with clear, high-fidelity textures.
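Two steps of the pipeline above can be illustrated concretely: computing an image mapping from a camera pose (standard pinhole projection of a mesh vertex) and the weighted-average fusion of per-view pixel samples into a texel color. The sketch below is an illustration of these two standard operations under simplified assumptions (a single vertex, pre-selected color samples and weights), not the paper's full optimization; the function names are hypothetical.

```python
import numpy as np

def project_vertex(X, K, R, t):
    """Project a 3D vertex X into an image using a pinhole camera
    with intrinsics K and extrinsic pose (R, t).
    This is the standard model behind the initial image mapping step."""
    x_cam = R @ X + t        # world coordinates -> camera coordinates
    u = K @ x_cam            # camera coordinates -> homogeneous pixel coords
    return u[:2] / u[2]      # perspective division -> (u, v) pixel position

def fuse_colors(colors, weights):
    """Weighted-average fusion of color samples from multiple views
    for a single texel, as in the texture synthesis step."""
    w = np.asarray(weights, dtype=float)
    c = np.asarray(colors, dtype=float)
    return (w[:, None] * c).sum(axis=0) / w.sum()

# Example: a vertex at depth 2 with identity intrinsics/extrinsics.
uv = project_vertex(np.array([1.0, 2.0, 2.0]),
                    np.eye(3), np.eye(3), np.zeros(3))
# Example: fuse a red and a blue sample with equal weights.
texel = fuse_colors([[255, 0, 0], [0, 0, 255]], [1.0, 1.0])
```

In the paper's setting the weights would come from 3D scene information (e.g. viewing conditions per view), and the projection would use the semantically and photometrically optimized mapping rather than the raw camera pose.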