Cheng Jingming, Xie Wenjun, Shen Ziqi, Li Lin, Liu Xiaoping. Multimodal Human Motion Synchronization Dataset[J]. Journal of Computer-Aided Design & Computer Graphics, 2022, 34(11): 1713-1722. DOI: 10.3724/SP.J.1089.2022.19194

Multimodal Human Motion Synchronization Dataset

  • Human motion datasets are an important foundation for research on motion data denoising, motion editing, motion synthesis, and related tasks. To support more general studies of multimodal motion data fusion, designing and collecting a public multimodal human motion dataset is an urgent problem. First, an acquisition environment is designed for precise motion data collected by sensor-based motion capture devices, rough motion data collected by body-sensing devices, and local inertial data collected by inertial measurement units (IMUs). Then, temporal synchronization among the devices is performed based on the network time protocol (NTP), and spatial synchronization is performed among the multimodal data. A full-body motion dataset named HFUT-MMD is captured, containing 6 971 568 frames of 6 motion types from 12 actors/actresses. Experimental results on the HFUT-MMD dataset using an existing algorithm show that the low-precision motion data can be optimized to closely approximate the accurate motion data, which corroborates the consistency among the modalities.
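The paper does not detail its NTP procedure, but the standard NTP clock-offset estimate it relies on can be sketched as follows. This is a minimal illustration (function names and the simulated timestamps are hypothetical, not from the paper), assuming the usual four-timestamp exchange between a capture device (client) and a reference time server:

```python
# Hypothetical sketch of NTP-style clock-offset estimation, as used for
# temporal synchronization among capture devices. Not the authors' code.

def ntp_offset(t0, t1, t2, t3):
    """Estimate the client clock's offset relative to the server.

    t0: client transmit time (client clock)
    t1: server receive time  (server clock)
    t2: server transmit time (server clock)
    t3: client receive time  (client clock)
    Negative result means the client clock runs ahead of the server.
    """
    return ((t1 - t0) + (t2 - t3)) / 2.0

def round_trip_delay(t0, t1, t2, t3):
    """Network round-trip delay, excluding server processing time."""
    return (t3 - t0) - (t2 - t1)

# Simulated example: a device clock running 5 ms ahead of the server,
# with a symmetric 2 ms one-way network delay and 1 ms server processing.
true_offset = 0.005
t0 = 100.000                     # client sends (client clock)
t1 = (t0 - true_offset) + 0.002  # server receives (server clock)
t2 = t1 + 0.001                  # server replies (server clock)
t3 = (t2 + true_offset) + 0.002  # client receives (client clock)

print(round(ntp_offset(t0, t1, t2, t3), 6))        # ≈ -0.005 (client 5 ms ahead)
print(round(round_trip_delay(t0, t1, t2, t3), 6))  # ≈ 0.004
```

Once each device's offset to the common server is estimated, timestamps from all modalities can be shifted onto one shared timeline before spatial alignment.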
