Abstract:
A network called RPMNet++ is proposed to improve the accuracy of point cloud registration in complex scenes and under non-ideal sampling conditions, including noise interference, inconsistent density, and structural incompleteness or damage caused by occlusion. It involves two main components. (a) A Copula denoising model. On the premise that points within a neighborhood exhibit a certain degree of similarity or consistency, point cloud features are extracted with convolutional neural networks and then used to compute the Kendall correlation coefficient (τ) and the Clayton Copula distribution function, so that negatively correlated noise points are filtered out while positively correlated interior points are preserved as far as possible. This model alleviates the feature extraction bias, parameter estimation error, and misjudged point correspondences caused by noise interference. (b) Local feature learning under a bidirectional attention mechanism. By taking attention direction into account, the conventional local attention mechanism is explicitly divided into two parts: attention from the sampling (center) point to its neighborhood points, and attention from the neighborhood points back to the sampling point. The two attention directions are then combined under different spatial encoding methods, enhancing the network's ability to learn fine-grained local features from the denoised sparse point clouds. Experiments on the public ModelNet40 dataset show that, compared with RPMNet, the proposed network significantly improves the isotropic mean rotation error and translation error, reducing them by (0.026, 0.001), (0.267, 0.0019), and (0.560, 0.007) on noise-free point clouds, Gaussian-noise point clouds with different densities, and Gaussian-noise point clouds with missing structure, respectively.
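The copula-based filtering step described above can be illustrated with a minimal numpy sketch. This is a simplified reading of the idea, not the paper's implementation: per-point feature vectors are assumed to be already extracted, the neighborhood index lists are assumed given, and `kendall_tau`, `tau_to_theta`, `clayton_cdf`, and `filter_noise` are illustrative names of my own.

```python
import numpy as np

def kendall_tau(x, y):
    """Kendall rank correlation for tie-free vectors (O(n^2) reference form)."""
    n = len(x)
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            s += np.sign((x[i] - x[j]) * (y[i] - y[j]))
    return 2.0 * s / (n * (n - 1))

def tau_to_theta(tau):
    """Clayton copula parameter from Kendall's tau: theta = 2*tau/(1 - tau),
    valid for tau in (0, 1)."""
    return 2.0 * tau / (1.0 - tau)

def clayton_cdf(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta), theta > 0."""
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def filter_noise(features, neighbor_idx):
    """Keep points whose features are positively correlated (tau > 0) with the
    mean feature of their neighborhood; drop negatively correlated ones, which
    stands in for the noise-rejection criterion sketched in the abstract."""
    keep = []
    for i, idx in enumerate(neighbor_idx):
        mean_feat = features[idx].mean(axis=0)
        if kendall_tau(features[i], mean_feat) > 0:  # positively correlated interior point
            keep.append(i)
    return np.array(keep)
```

In this toy form, a point whose feature profile runs opposite to its neighborhood (Kendall τ < 0) is rejected as noise, while the τ > 0 points would feed the Clayton copula (via `tau_to_theta`) for the downstream correlation modeling.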
Meanwhile, experiments on another public dataset, the Stanford University 3D model dataset, demonstrate that the proposed network outperforms seven recently published networks and has good generalization ability and application value.
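The bidirectional local attention described in the abstract can likewise be sketched in a few lines. This is a toy illustration under my own simplified score functions (dot-product forward attention, sigmoid-gated backward attention); the paper's actual spatial encodings and learned projections are not reproduced here, and `bidirectional_attention` is a hypothetical name.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def bidirectional_attention(center, neighbors):
    """center: (d,) feature of the sampling point; neighbors: (k, d) features
    of its neighborhood. Forward: the center attends over its neighbors.
    Backward: each neighbor attends to the center via a sigmoid gate.
    The two aggregated features are concatenated into a (2d,) output."""
    d = center.shape[0]
    scores = neighbors @ center / np.sqrt(d)          # (k,) raw affinities
    # forward direction: center -> neighbors (softmax-weighted neighbor sum)
    fwd_feat = softmax(scores) @ neighbors            # (d,)
    # backward direction: neighbors -> center (mean sigmoid gate on the center)
    bwd_gate = 1.0 / (1.0 + np.exp(-scores))          # (k,) per-neighbor gates
    bwd_feat = bwd_gate.mean() * center               # (d,)
    return np.concatenate([fwd_feat, bwd_feat])       # (2d,)
```

Concatenating the two directions is one simple way to "combine" them; the paper combines them under different spatial encodings, which this sketch deliberately omits.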