

An Unsupervised Detail-Preserving Point Cloud Completion Network Guided by Projection Views


     

    Abstract: Traditional supervised point cloud completion methods typically require complete point cloud data as a prior, which leads to poor generalization and low robustness. Meanwhile, the completion results generated by existing unsupervised learning approaches often deviate from the input shapes, making it difficult to recover the fine details of the original shapes. Based on the generative adversarial network (GAN) framework, an unsupervised detail-preserving point cloud completion network is proposed, guided by the feature information of three projection views obtained from the underlying shape. The proposed network consists of a point cloud shape completion branch and a projection-image completion branch. First, the point cloud shape completion branch employs a tree-structured graph convolutional generator to produce a coarse completed point cloud, aiming to recover the overall shape; the coarse result is then fed into a DGCNN to extract its features. Second, the projection-image completion branch projects the input model to obtain three projection views of the partial point cloud, which preserve the detail structure of the input. Next, an image generator based on cycle consistency is adopted to repair these projection views, and a ResNet-18 network is employed to extract features from the completed views. The feature distance between these image features and the shape features extracted from the generated point cloud is then computed and added to the loss of the discriminator, which judges whether the generated shape is real or fake. Finally, the parameters of the point cloud generator are optimized through this feedback, so that the generator learns both the global structure and the fine details of the input point cloud. The proposed network is trained and tested on the ShapeNet dataset for the shape completion task and further validated on the KITTI and ModelNet40 datasets. 
Compared with existing unsupervised completion networks, the average CD error of the proposed network is reduced by 11.0% to 41.0%, and the average F1-score is improved by 0.8% to 14.0%, demonstrating its effectiveness in repairing the shape structure of the input point cloud and recovering its shape details. In addition, the network is robust to different degrees of data incompleteness and noise, and generalizes well to unseen objects.
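The projection-view idea above can be illustrated with a minimal NumPy sketch: a point cloud is normalized into the unit cube and rasterized onto the three axis-aligned planes to give binary occupancy images. The function name, resolution, and rasterization details are illustrative assumptions; the paper's exact projection procedure is not specified here.

```python
import numpy as np

def three_view_projections(points, resolution=64):
    """Project an (N, 3) point cloud onto the three axis-aligned planes.

    Returns three binary occupancy images (dropping z, y, and x in turn).
    A simplified stand-in for the paper's projection-view step; the actual
    rasterization used by the network may differ.
    """
    # Normalize the cloud into the unit cube [0, 1]^3.
    mins = points.min(axis=0)
    extent = points.max(axis=0) - mins
    norm = (points - mins) / np.maximum(extent, 1e-8)
    # Map normalized coordinates to integer pixel indices.
    idx = np.clip((norm * (resolution - 1)).astype(int), 0, resolution - 1)
    views = []
    for drop_axis in (2, 1, 0):  # drop z -> front view, y -> top, x -> side
        keep = [a for a in (0, 1, 2) if a != drop_axis]
        img = np.zeros((resolution, resolution), dtype=np.uint8)
        img[idx[:, keep[0]], idx[:, keep[1]]] = 1  # mark occupied pixels
        views.append(img)
    return views
```

These binary views play the role of the three projection images that the cycle-consistent image generator would complete; a depth-map variant (storing the nearest coordinate along the dropped axis instead of 1) would be an equally plausible choice.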
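The reported numbers use Chamfer distance (CD) and F1-score, which can be computed as below. Note that conventions vary across papers (squared vs. unsquared distances, sum vs. mean of the two directional terms), so this sketch fixes one common convention and should not be read as the paper's exact evaluation protocol; the threshold `tau` is an assumed parameter.

```python
import numpy as np

def chamfer_and_f1(pred, gt, tau=0.01):
    """CD and F1-score between point sets pred (N, 3) and gt (M, 3).

    CD here is the sum of the two mean squared nearest-neighbor distances;
    F1 uses precision/recall at Euclidean threshold tau.
    """
    # Pairwise squared distances, shape (N, M), via broadcasting.
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)
    pred_to_gt = d2.min(axis=1)  # nearest gt point for each predicted point
    gt_to_pred = d2.min(axis=0)  # nearest predicted point for each gt point
    cd = pred_to_gt.mean() + gt_to_pred.mean()
    precision = (np.sqrt(pred_to_gt) < tau).mean()
    recall = (np.sqrt(gt_to_pred) < tau).mean()
    f1 = 2 * precision * recall / max(precision + recall, 1e-8)
    return cd, f1
```

With this convention, an 11.0%–41.0% CD reduction means the summed bidirectional error shrinks by that fraction relative to the baseline networks.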

     

