Abstract:
Motion synthesis for humanoid characters, which aims to generate realistic and natural motion sequences in response to user input, has long been a formidable challenge in virtual reality and character animation. Motion control policies address this challenge by computing joint torques subject to user input constraints, updating the character state through an existing physics engine, and thereby synthesizing motion sequences that satisfy user constraints while remaining physically realistic. In recent years, deep reinforcement learning has attracted significant attention owing to its strong performance on sequential decision-making and interactive tasks, offering a novel approach to learning physics-based control policies for humanoid characters. This paper reviews advances in motion control policy learning for humanoid characters, covering relevant research from both theoretical and practical design perspectives. On the practical side, existing work is examined from four key aspects grounded in the fundamental elements of deep reinforcement learning: state representation, reward function design, control policy design, and the physics simulation engine employed. A general technical framework is then analyzed in depth, highlighting potential directions for extending control policies, and the application of motion control policies for humanoid characters is illustrated through practical case studies. Finally, the current state of research is summarized, indicating that leveraging large-scale motion capture data to enhance the depth and breadth of motion control policies is a promising direction for future research.
The paper also outlines prospects for the development of motion control policy learning for humanoid characters, particularly in the areas of multimodal perception and control, world model learning, and embodied intelligence.