Yuan Hongxing, Wu Shaoqun, Yu Huiqing, Zhu Renxiang, Zhuge Xia. 2D-to-3D Method via Semantic Depth Transfer[J]. Journal of Computer-Aided Design & Computer Graphics, 2014, 26(1): 72-80.

2D-to-3D Method via Semantic Depth Transfer

Research on depth estimation from a single monocular image has been spurred by the availability of massive video data. Under the assumption that photometrically similar images are likely to have similar depth fields, this paper proposes a novel 2D-to-3D method based on semantic segmentation and depth transfer to estimate depth from a single input image. First, semantic segmentation of the scene is performed, and the resulting semantic labels are used to guide the depth transfer. Second, pixel-to-pixel correspondences between the input image and all candidate images are estimated with SIFT flow. Each candidate depth map is then warped by SIFT flow to form a rough approximation of the input's depth map. Finally, depth is assigned to different objects through a depth fusion step guided by the semantic labels. Experimental results on the Make3D dataset show that the algorithm outperforms existing depth transfer methods, reducing the average log error and relative error by 0.03 and 0.02, respectively.
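The abstract's final step, semantic-label-guided depth fusion of the warped candidate depth maps, can be illustrated with a small sketch. The following Python snippet is not the paper's actual algorithm; it assumes a per-pixel confidence-weighted average of the warped candidates followed by pulling each semantic region toward its median depth. The function name, the confidence weights, and the blending factor `alpha` are illustrative assumptions.

```python
import numpy as np

def fuse_depth_by_semantics(warped_depths, weights, labels):
    """Hypothetical sketch of semantic-label-guided depth fusion.

    warped_depths : (K, H, W) candidate depth maps already warped into the
                    input image's frame (e.g., via SIFT flow).
    weights       : (K, H, W) per-pixel confidences for each candidate
                    (e.g., derived from SIFT-flow matching energy).
    labels        : (H, W) integer semantic label map of the input image.

    Returns an (H, W) fused depth map in which each semantic region is
    smoothed toward a per-label statistic, so depth is assigned at the
    object level rather than purely per pixel.
    """
    eps = 1e-8
    # Per-pixel weighted average of the warped candidate depths.
    fused = (weights * warped_depths).sum(axis=0) / (weights.sum(axis=0) + eps)

    # Pull each semantic region toward its median depth to enforce
    # object-level consistency (the blending factor is an assumption).
    alpha = 0.5
    out = fused.copy()
    for lbl in np.unique(labels):
        mask = labels == lbl
        out[mask] = alpha * np.median(fused[mask]) + (1.0 - alpha) * fused[mask]
    return out

# Toy usage with random data, only to show the expected shapes.
K, H, W = 4, 8, 8
rng = np.random.default_rng(0)
depth = fuse_depth_by_semantics(rng.random((K, H, W)),
                                rng.random((K, H, W)),
                                rng.integers(0, 3, size=(H, W)))
print(depth.shape)  # (8, 8)
```

The per-label median is one simple way to encode the paper's idea that depth should be consistent within an object; the actual fusion described in the article may use a different statistic or an optimization-based formulation.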
