MOU Yong-wei, ZHANG Xin-jie, REN Han-shi, ZHANG Jia-jing, SUN Shu-sen. A Channel Multi-Scale Fusion Network for Scene Depth Map Super-Resolution[J]. Journal of Computer-Aided Design & Computer Graphics, 2023, 35(1): 37-47. DOI: 10.3724/SP.J.1089.2023.19328

A Channel Multi-Scale Fusion Network for Scene Depth Map Super-Resolution

  • To overcome the low resolution and low quality of scene depth maps captured by consumer depth cameras, a scene depth map super-resolution network, CMSFN, is proposed based on channel multi-scale fusion and guided by an input high-resolution color image. CMSFN adopts a multi-scale pyramid structure to exploit the multi-scale information of the scene depth map; at each pyramid level, the resolution of the depth map is improved by a channel multi-scale up-sampling operation and residual learning. Firstly, the depth feature map and the corresponding color feature map are fused through densely connected blocks at each level of the super-resolution network, so that color-depth features can be reused and the structure information of the underlying scene can be fused. Secondly, the fused depth feature map is divided into multi-scale channel groups, which yields receptive fields of different sizes and captures scene features at different scales. Finally, global and local residual structures are added to CMSFN, which alleviate vanishing gradients while recovering the high-frequency residual information of the scene depth map. On group A of the Middlebury dataset, the average root mean square error of CMSFN is 1.33, a reduction of 6.99% and 26.92% compared with the MFR and PMBANet networks, respectively; on group B of the Middlebury dataset, the average root mean square error of CMSFN is 1.41, a reduction of 9.03% and 17.05%, respectively. Experimental results show that CMSFN effectively recovers the structural information of scene depth maps.
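The channel multi-scale splitting described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name, the box-filter smoothing, and the kernel sizes (1, 3, 5) are all hypothetical choices used here only to mimic channel groups with different receptive fields, combined with a local residual connection.

```python
import numpy as np

def channel_multiscale_split(features, scales=(1, 3, 5)):
    """Hypothetical sketch: split channels into groups, smooth each group
    with a box filter of a different size (approximating receptive fields
    of different sizes), then add a local residual connection."""
    groups = np.array_split(features, len(scales), axis=0)  # split along channels
    out = []
    for group, k in zip(groups, scales):
        if k == 1:
            out.append(group)  # identity branch: smallest receptive field
            continue
        pad = k // 2
        padded = np.pad(group, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
        smoothed = np.zeros_like(group)
        for dy in range(k):          # k x k box filter via shifted sums
            for dx in range(k):
                smoothed += padded[:, dy:dy + group.shape[1], dx:dx + group.shape[2]]
        out.append(smoothed / (k * k))
    fused = np.concatenate(out, axis=0)  # re-assemble the channel groups
    return features + fused              # local residual connection

# Toy fused depth feature map: 6 channels, 8x8 spatial resolution.
feat = np.random.rand(6, 8, 8).astype(np.float32)
out = channel_multiscale_split(feat)
```

In the actual network these branches would be learned convolutions rather than fixed box filters; the sketch only shows how channel splitting gives each group a different effective receptive field before the groups are fused back together.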