Texture Optimization Algorithm for 3D Scenes Fusing Semantic-Grayscale Features
Graphical Abstract
Abstract
To tackle blurry artifacts in texture mapping for 3D reconstruction, this paper proposes a texture optimization algorithm that fuses semantic and grayscale features to recover photorealistic texture maps for 3D scenes from multi-view images. The algorithm first computes the initial image-to-model mapping from the camera pose associated with each image. The initial mapping is then optimized in two stages: semantic features are fused to ensure correct colors for the geometric models in the scene, and grayscale features are fused to ensure correct colors for the texture within each geometric model. Finally, texture maps are synthesized by fusing pixels with the 3D scene information under a weighted-averaging strategy, and the texture maps are back-projected onto the geometry according to the mapping relationship, yielding 3D scenes with high-fidelity texture. Experiments on the Fountain and BundleFusion (Office0, Office2) public datasets show that, compared with existing texture optimization algorithms, the proposed algorithm achieves an average structural similarity index measure (SSIM) of 0.89 between the synthesized texture maps and the ground truth, and an average peak signal-to-noise ratio (PSNR) of 39.10 dB. The texture optimization times on the two datasets are 62 s and 106 s, respectively, an average speedup of 67.2%. The algorithm achieves a good balance between visual quality and optimization speed, and retains a clear advantage in the presence of large camera pose errors and low-precision reconstructed geometry.
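The weighted-averaging fusion step mentioned above can be illustrated with a minimal sketch. This is not the paper's implementation; it only shows the general idea of combining one texel's color samples from several views using per-view confidence weights (here assumed, for illustration, to come from something like a viewing-angle score):

```python
import numpy as np

def fuse_texel_colors(colors, weights):
    """Weighted-average fusion of one texel's color samples from multiple views.

    colors:  (V, 3) array, one RGB sample of the texel per view
    weights: (V,) array, per-view confidence (e.g. a viewing-angle score;
             the actual weighting scheme in the paper may differ)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights sum to 1
    return (np.asarray(colors, dtype=float) * w[:, None]).sum(axis=0)

# Three hypothetical views of one texel; the frontal view is weighted most.
samples = np.array([[200, 100, 50],
                    [210, 110, 60],
                    [190,  90, 40]])
fused = fuse_texel_colors(samples, [0.9, 0.3, 0.3])
print(fused)  # -> [200. 100.  50.]
```

In a full pipeline this fusion would run per texel over all views in which the texel is visible, with visibility and weights derived from the optimized mapping relationship.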