Fast Repair Method for Transparent Object Depth Images Based on Multi-Scale Feature Fusion

  • Abstract: Transparent objects are common in daily life and have unique visual characteristics that make it difficult for standard visual 3D sensors to estimate their depth accurately. In most cases, the depth information captured by a visual 3D sensor either takes the depth value of the background behind the transparent object or is missing over large areas. To repair such missing depth quickly, a fast depth-image repair method for transparent objects based on semantic segmentation and multi-scale fusion is proposed: a lightweight real-time semantic segmentation network predicts a mask of the transparent object; the erroneous depth readings inside the masked region are removed from the depth image; and multi-scale feature extraction and fusion is then performed on the color image and the cleaned depth image to complete the repair. The method is validated on the Clear Grasp dataset, which contains more than 50,000 RGB-D image sets. Experimental results show that the proposed method achieves 0.027, 72.98, and 98.04 on the metrics MAE, δ1.05, and δ1.25, respectively, outperforming existing methods while also improving efficiency.
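The mask-based cleaning step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name `strip_masked_depth` and the use of 0 as the missing-depth marker are assumptions, and the mask is presumed to come from the segmentation network as a binary array aligned with the depth map.

```python
import numpy as np

def strip_masked_depth(depth, transparent_mask):
    """Invalidate depth readings inside the predicted transparent-object mask.

    Sensor depth inside the mask typically belongs to the background behind
    the transparent object, so it is erroneous. It is set to 0 here (a common
    missing-depth marker, an assumption) before the repair network sees it.
    """
    cleaned = depth.copy()                       # keep the raw sensor frame intact
    cleaned[transparent_mask.astype(bool)] = 0.0  # drop depth under the mask
    return cleaned
```

The cleaned depth map, together with the RGB image, would then be fed to the multi-scale feature extraction and fusion stage.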

     

    Abstract: Transparent objects are common in daily life and have unique visual characteristics that make it difficult for standard visual 3D sensors to estimate their depth accurately. In most cases, the depth information captured by a visual 3D sensor for a transparent object either takes the depth value of the background behind the object or is missing over a large area. To quickly repair the missing depth of transparent objects in the depth image, a method for rapid depth-image repair of transparent objects based on semantic segmentation and multi-scale fusion was proposed. Firstly, the mask of the transparent object was predicted by lightweight real-time semantic segmentation. Secondly, the wrong depth information in the masked region of the depth scene image was removed. Finally, multi-scale feature extraction and feature fusion were performed on the color image and the error-removed depth image to quickly complete the repair of the transparent object's depth image. Experimental results show that the proposed method achieves results of 0.027, 72.98, and 98.04 on the metrics MAE, δ1.05, and δ1.25 for the depth repair of transparent objects, which are better than those of existing methods.
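The reported metrics follow standard depth-estimation conventions: MAE is the mean absolute error over valid pixels, and δt is the percentage of pixels whose prediction/ground-truth ratio (taken in whichever direction is larger) falls below the threshold t. A sketch of how such numbers are typically computed, assuming 0 marks pixels without ground-truth depth (the function name and that convention are assumptions, not from the paper):

```python
import numpy as np

def depth_metrics(pred, gt, valid=None):
    """Return (MAE, delta<1.05 in %, delta<1.25 in %) over valid pixels.

    delta<t is the standard threshold accuracy: the percentage of pixels
    where max(pred/gt, gt/pred) < t.
    """
    if valid is None:
        valid = gt > 0                      # assume 0 = no ground truth
    p, g = pred[valid], gt[valid]
    mae = float(np.abs(p - g).mean())
    ratio = np.maximum(p / g, g / p)        # symmetric relative error
    delta = lambda t: float((ratio < t).mean() * 100.0)
    return mae, delta(1.05), delta(1.25)
```

A perfect prediction yields MAE 0 and 100% for both thresholds; the paper's 72.98 / 98.04 figures mean roughly 73% of pixels are within 5% of ground truth and 98% within 25%.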

     
