Abstract:
Model registration methods for single images typically compute the model configuration from point clouds estimated from the images; such point clouds have uneven density and long distances from the centroid to individual points. Existing methods fail to fully account for these characteristics of estimated point clouds, and they do not consider the correlations between the original object point clouds and the normalized point clouds. To address these issues, we design a novel object feature extraction network and propose a model registration method based on centroid voting and correlations between point clouds. First, we apply farthest point sampling to the point clouds. Then, exploiting the fact that an object's local features point towards its centroid, we use local feature regression to estimate each sampling point's displacement vector relative to the centroid. Furthermore, a multi-layer perceptron with shared weights explores the correlations between matching points in the object and normalized point clouds. In addition, a self-supervised loss function on key points is introduced to make the predicted weights more reliable. Experimental results on the ScanNet25k dataset demonstrate that the proposed method improves task accuracy by 8.2 percentage points (pp) over the current state-of-the-art method, ROCA.
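For readers unfamiliar with the sampling step mentioned above, a minimal sketch of greedy farthest point sampling follows; this is standard background, not the paper's implementation, and the function name and NumPy-based formulation are illustrative assumptions.

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Greedily select k points: each new pick is the point
    farthest from the set already selected (illustrative sketch,
    not the paper's code)."""
    n = points.shape[0]
    selected = [0]                      # arbitrary seed point
    dists = np.full(n, np.inf)          # distance to nearest selected point
    for _ in range(k - 1):
        # update each point's distance to the most recently selected point
        dists = np.minimum(
            dists,
            np.linalg.norm(points - points[selected[-1]], axis=1),
        )
        # pick the point farthest from all selected points so far
        selected.append(int(np.argmax(dists)))
    return points[selected]
```

Such sampling yields a subset that covers the object more evenly than random sampling, which is why it is a common first step when point-cloud density is uneven.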