
3D Scanning and Reconstruction of Scene-Level Targets Based on Sparse Sequence Fusion

  • Abstract: Scene scanning suffers from the large volume of depth-map data, reconstruction drift caused by accumulated matching errors, and high time cost. To address these problems, a sparse-sequence-fusion 3D scanning and reconstruction method for scene-level targets is proposed. First, the depth-map sequence is sampled to select supporting depth maps. Second, the supporting subset is divided into scan fragments, and depth maps are matched and fused within each fragment to generate surface fragments. Third, geometric features of the surface fragments drive continuous iterative registration between neighboring fragments, optimizing the camera pose of each scan fragment. Finally, the supporting depth-map sequence is fused to generate the 3D surface of the scene target. Tests are conducted on depth sequences captured with a consumer-grade depth camera and on two public datasets, SceneNN and Stanford 3D Scene, comparing sparse-sequence fusion with dense-sequence fusion. Experimental results show that the method reduces the registration RMSE by 16%-28%, completes sparse-sequence fusion with only 8%-54% of the data, and shortens the running time by 56% on average; it also enhances the effectiveness and robustness of the scanning process and significantly improves the reconstruction quality of the scanned scene.

     

    Abstract: 3D scanning of scene-level targets usually confronts several bottlenecks, including a large amount of redundant data, feature drift, and high time consumption. To solve these problems, a scene-level target reconstruction method based on sparse sequence fusion is proposed. First, supporting subsets are constructed by sampling the depth image sequence. Second, the supporting subsets are divided into a set of successive fragments. Third, to optimize the camera motion trajectory, geometric features are introduced into the continuous iterative registration between multiple fragments. Finally, fusing the supporting subsets generates the target surface. Scanning tests and comparison experiments are conducted on depth sequences captured by a consumer depth camera and on two public datasets, SceneNN and Stanford 3D Scene. The results show that the proposed method reduces the registration RMSE by 16%-28%, uses only 8%-54% of the data, and shortens the running time by about 56%. In addition, it enhances both the effectiveness and robustness of 3D scanning and significantly improves the reconstruction quality.
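The first two stages of the pipeline described above can be sketched in a minimal form: subsample the depth-map sequence into a supporting subset, then partition that subset into successive scan fragments. This is an illustrative sketch only; the function names, the fixed sampling stride, and the fixed fragment size are assumptions, not the authors' actual selection criteria (the paper's sampling yields 8%-54% of the data depending on the sequence).

```python
from typing import List, Sequence


def select_supporting_frames(num_frames: int, stride: int = 8) -> List[int]:
    """Uniformly subsample frame indices to form a supporting subset.

    The paper selects supporting depth maps by sampling the sequence; a
    fixed stride is used here purely as a placeholder for that criterion.
    """
    return list(range(0, num_frames, stride))


def split_into_fragments(indices: Sequence[int],
                         fragment_size: int = 5) -> List[List[int]]:
    """Partition supporting frame indices into successive scan fragments.

    Each fragment would later be fused into a surface fragment and then
    registered against its neighbors to optimize camera poses.
    """
    return [list(indices[i:i + fragment_size])
            for i in range(0, len(indices), fragment_size)]


if __name__ == "__main__":
    support = select_supporting_frames(100, stride=10)   # 10 of 100 frames
    fragments = split_into_fragments(support, fragment_size=4)
    print(len(support), len(fragments))                  # 10 3
```

The later stages (intra-fragment depth-map fusion and inter-fragment geometric registration) would operate on these fragment index lists, e.g. via TSDF fusion and multiway ICP-style registration.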

     

