Three-Dimensional Local Feature Descriptor based on Dynamic Graph Convolution and PointNet
Abstract
Three-dimensional (3-D) local feature description is a fundamental task in 3-D computer vision. Existing methods either rely on noise-sensitive handcrafted features or depend on rotation-variant neural network structures. This paper proposes a rotation-invariant and general local feature descriptor, named DGCPNet, that combines Dynamic Graph Convolution and PointNet. First, a local patch is aligned with a Local Reference Frame (LRF) and used as the input to our network. Then, local geometric features and point features are extracted by a dynamic graph convolution model and a PointNet model, respectively. This resolves the issue that a single PointNet model cannot learn the relationships between points in the input point set. Finally, to further improve the learning ability of the network, a dual-attention layer, comprising a Point Self-Attention (PSA) module and a Local Spatial-Attention (LSA) module, is proposed to integrate the local geometric features and the point features into the final descriptor. Extensive experiments on indoor and outdoor datasets demonstrate that DGCPNet outperforms existing methods in descriptiveness, robustness, and generalization.