Spatiotemporal-aware event representation for event-based human pose estimation
Abstract: To address the underutilization of spatiotemporal features in event-camera data, a human pose estimation method that fuses spatiotemporal-aware event representations is proposed. The method first constructs spatiotemporal event frames by fusing timestamp encoding with per-pixel event counts, preserving the spatiotemporal dynamics of the raw data; it then designs a spatiotemporal-aware feature extraction module that hierarchically captures spatiotemporal features through 3D convolutions and a temporal attention mechanism; finally, it introduces a dynamic weighting strategy that adaptively fuses multi-scale spatiotemporal features to refine joint localization. Experiments on the DHP19 dataset show a mean per-joint position error of 5.56, a reduction of 1.82 relative to the baseline, validating the effectiveness of spatiotemporal feature fusion.
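The spatiotemporal event frame described above can be sketched as a minimal two-channel representation: one channel accumulates per-pixel event counts, the other keeps the normalized timestamp of the most recent event at each pixel. This is an illustrative assumption of the construction, not the paper's exact formulation; the function name and normalization choices are hypothetical.

```python
import numpy as np

def build_spatiotemporal_frame(xs, ys, ts, height, width):
    """Sketch of a spatiotemporal event frame (assumed layout):
    channel 0 = normalized event counts, channel 1 = normalized
    timestamp of the latest event at each pixel."""
    count = np.zeros((height, width), dtype=np.float32)
    time_surface = np.zeros((height, width), dtype=np.float32)
    t0, t1 = float(ts.min()), float(ts.max())
    span = max(t1 - t0, 1e-9)  # avoid division by zero for a single timestamp
    for x, y, t in zip(xs, ys, ts):
        count[y, x] += 1.0
        time_surface[y, x] = (t - t0) / span  # most recent event wins, in [0, 1]
    if count.max() > 0:
        count /= count.max()  # normalize counts to [0, 1]
    return np.stack([count, time_surface], axis=0)  # shape (2, H, W)
```

A frame built this way keeps both how often a pixel fired (count channel) and when it last fired (timestamp channel), which is the spatiotemporal dynamics the abstract says a plain count image loses.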