Feature Preserving Mesh Reconstruction from a Single Image
Graphical Abstract
Abstract
Reconstructing 3D objects from a single image often fails to preserve the sharp features of the objects. In this paper, an effective feature-preserving 3D mesh generation method based on a deep neural network is proposed for a single input image. First, image features are extracted from the input image with VGG-16, and a dedicated edge detection layer is designed to capture sharp features. Second, the vertices of the mesh (initially an ellipsoid) are projected onto the feature map and the edge detection map to obtain their local features and to decide whether they are sharp feature points. Third, the local features and positions of the vertices are concatenated and fed into an improved graph convolutional neural network (GCNN): ordinary graph convolution is applied to non-sharp vertices, while the 0-neighborhood graph convolutional neural network (0N-GCNN) is applied to the detected sharp feature points so that they are not over-smoothed by their neighboring vertices. The GCNN output predicts the new positions and features of the vertices. Finally, the mesh vertices and features are upsampled by Loop subdivision. After three passes of this deformation process (2D feature projection, sharp feature detection, deformation by GCNN, and upsampling), the initial ellipsoid is transformed into the shape depicted in the input image. Experiments are conducted on the ShapeNet dataset using the PyTorch framework, and the proposed method is compared with existing methods both quantitatively and qualitatively. The results show that the proposed method outperforms most existing methods in both Chamfer distance and F-score, and achieves the best mean Chamfer distance and mean F-score(2τ). Visual comparisons also show that the method effectively improves feature preservation.
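To illustrate the idea of combining ordinary graph convolution with a 0-neighborhood update for sharp feature points, the following is a minimal PyTorch sketch. It is not the paper's implementation: the class name SharpAwareGraphConv, its interface, and the use of a row-normalized adjacency matrix and a boolean per-vertex sharp-feature mask are assumptions made only for illustration.

import torch
import torch.nn as nn

class SharpAwareGraphConv(nn.Module):
    # Sketch of a graph convolution layer that aggregates neighbor features
    # for regular vertices but uses only the vertex's own features
    # (a 0-neighborhood update) for detected sharp feature points,
    # so they are not smoothed toward their neighbors.
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.w_self = nn.Linear(in_dim, out_dim)   # applied to every vertex
        self.w_neigh = nn.Linear(in_dim, out_dim)  # applied to aggregated neighbors

    def forward(self, x, adj, sharp_mask):
        # x:          (V, in_dim)  per-vertex features (local image features + position)
        # adj:        (V, V)       row-normalized mesh adjacency matrix
        # sharp_mask: (V,)         True where the vertex is a detected sharp feature point
        neigh = self.w_neigh(adj @ x)          # ordinary GCN neighbor term
        out_regular = self.w_self(x) + neigh   # regular vertices: self + neighbors
        out_sharp = self.w_self(x)             # sharp vertices: self term only
        mask = sharp_mask.unsqueeze(-1).float()
        return mask * out_sharp + (1.0 - mask) * out_regular

# Toy usage: 4 vertices, vertex 0 flagged as a sharp feature point.
x = torch.randn(4, 6)
adj = torch.tensor([[0., .5, .5, 0.],
                    [.5, 0., 0., .5],
                    [.5, 0., 0., .5],
                    [0., .5, .5, 0.]])
sharp = torch.tensor([True, False, False, False])
layer = SharpAwareGraphConv(6, 3)
y = layer(x, adj, sharp)   # shape (4, 3)

In this sketch the sharp/non-sharp branches share the same self-transform, so switching a vertex between the two modes only adds or removes the neighbor aggregation term; whether the paper shares weights in this way is not stated in the abstract.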