2D-to-3D Method via Semantic Depth Transfer
Abstract
Research on depth estimation from a single monocular image has been promoted by the availability of massive video data. Under the assumption that photometrically similar images are likely to have similar depth fields, in this paper we propose a novel 2D-to-3D method based on semantic segmentation and depth transfer to estimate depth information from a single input image. Firstly, semantic segmentation of the scene is performed, and the semantic labels are used to guide the depth transfer. Secondly, candidate images similar to the input are retrieved, and pixel-to-pixel correspondences between the input image and each candidate are established through SIFT flow. Each candidate's depth map is then warped by the SIFT flow to form a rough approximation of the input's depth map. Finally, depth is assigned to the different objects through semantic-label-guided depth fusion. Experimental results on the Make3D dataset demonstrate that our algorithm outperforms existing depth transfer methods, reducing the average log error and relative error by 0.03 and 0.02, respectively.
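The pipeline is easiest to see as warp-then-fuse. The sketch below is a minimal NumPy illustration of the last two stages, assuming a dense correspondence field has already been computed (e.g. by SIFT flow) and a semantic label map is available; the function names `warp_candidate_depth` and `fuse_depths` and the simple median-plus-region blending are illustrative stand-ins, not the paper's exact fusion objective.

```python
import numpy as np

def warp_candidate_depth(depth, flow):
    """Warp a candidate's depth map onto the input image grid.

    flow[y, x] = (dy, dx) is a dense correspondence field (e.g. from
    SIFT flow) mapping each input pixel to its match in the candidate.
    Nearest-neighbour lookup keeps the sketch short; real systems
    typically interpolate.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(ys + flow[..., 0], 0, h - 1).astype(int)
    src_x = np.clip(xs + flow[..., 1], 0, w - 1).astype(int)
    return depth[src_y, src_x]

def fuse_depths(warped_depths, labels, blend=0.5):
    """Fuse warped candidate depths under semantic guidance.

    A per-pixel median across candidates is robust to bad matches; each
    semantic region is then pulled toward its own median depth so that
    one object receives a coherent depth assignment (a simplified
    stand-in for the label-guided fusion described in the abstract).
    """
    fused = np.median(np.stack(warped_depths, axis=0), axis=0)
    out = fused.copy()
    for lab in np.unique(labels):
        mask = labels == lab
        out[mask] = (1 - blend) * fused[mask] + blend * np.median(fused[mask])
    return out

# Toy usage with random stand-in data (no real SIFT flow or segmenter here).
rng = np.random.default_rng(0)
h, w, k = 60, 80, 5                                 # image size, candidate count
flows = rng.integers(-2, 3, size=(k, h, w, 2))      # fake correspondence fields
depths = [rng.uniform(1.0, 10.0, size=(h, w)) for _ in range(k)]
labels = (np.arange(h)[:, None] // 20) * 4 + np.arange(w)[None, :] // 20  # fake segments
warped = [warp_candidate_depth(d, f) for d, f in zip(depths, flows)]
estimate = fuse_depths(warped, labels)
print(estimate.shape, float(estimate.min()), float(estimate.max()))
```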