Citation: Ruikun Zheng, Gengxin Liu, Ruizhen Hu. Semi-supervised Character Motion Style Transfer[J]. Journal of Computer-Aided Design & Computer Graphics. DOI: 10.3724/SP.J.1089.2023-00385


Semi-supervised Character Motion Style Transfer


Abstract: Most existing methods for motion style transfer rely on a small amount of motion data with style labels, which limits their ability to generalize to unseen motion data outside the training distribution. Other, arbitrary motion style transfer methods use large-scale unlabeled motion data to enhance generalization, but often suffer from a loss of style consistency and produce unnatural results. To address these problems, this paper proposes a semi-supervised motion style transfer framework that combines a small amount of style-labeled motion data with large-scale unlabeled motion data to achieve transfer of a specific style onto arbitrary motion content. Specifically, the framework adopts graph convolutional neural networks and designs corresponding loss functions to preserve the content of unlabeled motion and the style of labeled motion. In addition, the framework improves stylization quality with a StyleNet fusion module and a multi-level network architecture. Experimental results demonstrate that the proposed method generalizes better, produces higher-quality results, and preserves style consistency better than state-of-the-art methods.
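
To make the semi-supervised objective in the abstract concrete, below is a minimal PyTorch sketch: graph convolutions over the skeleton, a StyleNet-like fusion module, a content loss on unlabeled motion, and a style loss on labeled motion. All module names (GraphConv, StyleNet, StyleTransferNet), tensor shapes, the AdaIN-style modulation, and the exact loss forms are illustrative assumptions inferred from the abstract, not the paper's actual architecture or code.

```python
# Illustrative sketch only: all names, shapes, and loss forms are assumptions
# inferred from the abstract, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphConv(nn.Module):
    """Toy graph convolution over a fixed skeleton adjacency matrix."""
    def __init__(self, in_ch, out_ch, adj):
        super().__init__()
        self.register_buffer("adj", adj)                # (J, J), row-normalized
        self.proj = nn.Linear(in_ch, out_ch)

    def forward(self, x):                               # x: (B, T, J, C)
        x = torch.einsum("ij,btjc->btic", self.adj, x)  # aggregate neighboring joints
        return F.relu(self.proj(x))

class StyleNet(nn.Module):
    """Hypothetical fusion module: injects a pooled style code into content
    features via AdaIN-style scale and shift."""
    def __init__(self, feat_dim, style_dim):
        super().__init__()
        self.affine = nn.Linear(style_dim, 2 * feat_dim)

    def forward(self, content, style):                  # (B,T,J,D), (B,S)
        scale, shift = self.affine(style)[:, None, None, :].chunk(2, dim=-1)
        mu = content.mean(dim=(1, 2), keepdim=True)
        sigma = content.std(dim=(1, 2), keepdim=True) + 1e-6
        return (content - mu) / sigma * (1 + scale) + shift

class StyleTransferNet(nn.Module):
    def __init__(self, adj, ch=3, dim=32, style_dim=16, num_styles=8):
        super().__init__()
        self.content_enc = GraphConv(ch, dim, adj)
        self.style_enc = GraphConv(ch, style_dim, adj)
        self.fuse = StyleNet(dim, style_dim)
        self.dec = nn.Linear(dim, ch)                   # back to joint features
        self.cls = nn.Linear(ch, num_styles)            # style classifier head

    def forward(self, content_motion, style_motion):
        c = self.content_enc(content_motion)              # (B, T, J, D)
        s = self.style_enc(style_motion).mean(dim=(1, 2)) # (B, S) pooled style code
        return self.dec(self.fuse(c, s))                  # stylized motion

def semi_supervised_step(model, unlabeled, labeled, style_labels):
    out = model(unlabeled, labeled)
    # Content loss on unlabeled motion: the stylized result must re-encode to
    # the same content code as the input (cycle-style consistency; assumed form).
    with torch.no_grad():
        target = model.content_enc(unlabeled)
    loss_content = F.l1_loss(model.content_enc(out), target)
    # Style loss on labeled motion: the output should classify as the
    # exemplar's style label.
    logits = model.cls(out.mean(dim=(1, 2)))
    loss_style = F.cross_entropy(logits, style_labels)
    return loss_content + loss_style

J = 22                                                  # joint count (assumed)
model = StyleTransferNet(adj=torch.eye(J))              # placeholder adjacency
unlabeled = torch.randn(4, 60, J, 3)                    # (batch, frames, joints, xyz)
labeled = torch.randn(4, 60, J, 3)
labels = torch.randint(0, 8, (4,))
semi_supervised_step(model, unlabeled, labeled, labels).backward()
```

The two terms reflect the semi-supervised split described in the abstract: the cycle-style content loss lets unlabeled clips supervise content preservation without style labels, while the classification term ties the output to the exemplar's labeled style. The paper's multi-level network architecture would presumably stack such blocks at multiple resolutions, which this single-level sketch omits.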
