

DSFFNet: Dual-Side Feature Fusion Network for 3D Pose Transfer


     

    Abstract: To address the pose distortion that arises as pose features propagate forward through existing networks, this paper proposes a Dual-Side Feature Fusion Network for 3D pose transfer (DSFFNet). First, a pose encoder extracts a fixed-length pose code from the source mesh, which is combined with the target vertices to form a mixed feature. Then, a Feature Fusion Adaptive Instance Normalization (FFAdaIN) module is designed to process pose and identity features simultaneously, so that the pose features are compensated during layer-by-layer forward propagation, thereby resolving the pose distortion problem. Finally, a mesh decoder built from this module gradually transfers the pose onto the target mesh. Experimental results on the SMPL, SMAL, FAUST, and MultiGarment datasets show that DSFFNet resolves the pose distortion problem while keeping a compact network structure, achieves stronger pose transfer capability and faster convergence, and adapts to meshes with different numbers of vertices. Code is available at https://github.com/YikiDragon/DSFFNet
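The FFAdaIN module described above builds on adaptive instance normalization (AdaIN), which re-styles one feature set with the per-channel statistics of another. The paper's exact module is not reproduced here; the following is a minimal NumPy sketch of the underlying AdaIN operation only, with the function name `adain` and the (N, C) feature layout chosen for illustration: identity features of the target mesh are normalized per channel, then shifted and scaled to match the statistics of the source pose features.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Plain AdaIN: align the per-channel statistics of `content`
    with those of `style`.

    content: (N, C) array, e.g. per-vertex identity features of the
             target mesh (hypothetical layout for this sketch).
    style:   (M, C) array, e.g. pose features from the source mesh.
    Returns a (N, C) array whose channel-wise mean and std match
    those of `style`.
    """
    c_mu, c_std = content.mean(axis=0), content.std(axis=0) + eps
    s_mu, s_std = style.mean(axis=0), style.std(axis=0) + eps
    # Normalize content per channel, then rescale with style statistics.
    return s_std * (content - c_mu) / c_std + s_mu
```

In an FFAdaIN-style decoder layer, an operation of this kind would be applied at every layer, so the pose statistics re-enter the features repeatedly instead of being diluted during forward propagation.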

     

