Kai Yao, Li Liu, Xiaodong Fu, Lijun Liu, Wei Peng. Absolute and Relative Depth Fusion for 3D Multi-Person Pose Estimation[J]. Journal of Computer-Aided Design & Computer Graphics. DOI: 10.3724/SP.J.1089.2024-00333

Absolute and Relative Depth Fusion for 3D Multi-Person Pose Estimation

To address inaccurate scale representation, imprecise pose recovery, and inadequate depth fusion in 3D multi-person pose estimation from monocular images, an absolute and relative depth fusion method is proposed. First, human instance detection is employed to generate multiple human instances and to extract the 2D coordinates of the dual root joints. Guided by the neck and pelvis coordinates, absolute depth features are extracted for multi-person absolute depth estimation. Then, a diffusion-model-based relative depth estimation module is constructed to capture the relative depth information and spatial relationships among each person's joints, yielding the root-relative depths and relative 3D poses of multiple individuals. Finally, coordinate cascading and the perspective camera model are combined to fuse the absolute root depths with the root-relative 3D poses, generating the final 3D multi-person poses. Experimental results show that, compared with existing methods, the proposed method reduces the mean per joint position error by 3.7% on the Human3.6M and MuPoTs-3D datasets and increases the percentage of correct 3D keypoints by 2.2 and 2.5 percentage points, producing accurate 3D multi-person pose estimates. Qualitative results on the COCO dataset further show that the method generalizes robustly.
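The abstract's final fusion step, combining an estimated absolute root depth with a root-relative 3D pose through the perspective camera model, can be illustrated with a minimal sketch. The function and variable names below (e.g. fuse_absolute_and_relative, root_uv) are illustrative assumptions, not from the paper; only the standard perspective back-projection formula is assumed.

```python
import numpy as np

def fuse_absolute_and_relative(root_uv, root_depth, rel_pose, fx, fy, cx, cy):
    """Back-project the 2D root joint into camera space using the perspective
    camera model, then translate the root-relative 3D pose to that location.

    root_uv    : (2,)   2D pixel coordinates of the root joint
    root_depth : float  estimated absolute depth of the root joint
    rel_pose   : (J, 3) root-relative 3D joint coordinates (root at the origin)
    fx, fy     : focal lengths in pixels
    cx, cy     : principal point in pixels
    returns    : (J, 3) absolute 3D pose in camera coordinates
    """
    # Perspective back-projection: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
    root_xyz = np.array([
        (root_uv[0] - cx) * root_depth / fx,
        (root_uv[1] - cy) * root_depth / fy,
        root_depth,
    ])
    # Shift every root-relative joint by the absolute root position
    return rel_pose + root_xyz

# Usage with hypothetical values: one person, three joints (millimetres)
if __name__ == "__main__":
    root_uv = np.array([640.0, 360.0])
    rel_pose = np.array([[0.0, 0.0, 0.0],
                         [100.0, -50.0, 30.0],
                         [-80.0, 200.0, -20.0]])
    abs_pose = fuse_absolute_and_relative(root_uv, 3200.0, rel_pose,
                                          fx=1500.0, fy=1500.0, cx=640.0, cy=360.0)
    print(abs_pose)
```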
