Abstract:
Motion synthesis for humanoid characters, which aims to generate realistic and natural motion sequences in response to user input, has been a long-standing challenge in virtual reality and character animation. Motion control policies compute joint torques from user input constraints, update the character state through existing physics engines, and thereby synthesize motion sequences that satisfy those constraints while maintaining physical realism. In recent years, deep reinforcement learning has attracted researchers' attention for its outstanding performance in sequential decision-making and interactive tasks, offering a new avenue for learning physics-based control policies for humanoid characters. This survey first reviews the theoretical foundations of character modeling, simulation, and reinforcement learning, as well as the mainstream deep reinforcement learning algorithms applied to learning motion control policies for a single humanoid character. It then introduces, in terms of the fundamental elements of reinforcement learning, the application design of research on learning humanoid character motion control policies. Finally, it summarizes recent research advances and remaining challenges, and discusses future development trends of learning methods for humanoid character motion control policies.
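The control loop described above (a policy produces actions, a low-level controller converts them to joint torques, and a physics engine advances the character state) can be illustrated with a minimal sketch. This assumes the common PD-target actuation scheme from the physics-based animation literature; the `physics_step` function below is a toy single-integrator placeholder standing in for a real physics engine, and all names and gains are illustrative, not from the surveyed work.

```python
import numpy as np

def pd_torques(q, qd, q_target, kp=300.0, kd=30.0):
    # PD controller: torques drive joint angles q toward the policy's targets
    return kp * (q_target - q) - kd * qd

def physics_step(q, qd, tau, dt=1.0 / 240.0, inertia=1.0):
    # Toy forward dynamics via explicit Euler; a real engine (e.g. a rigid-body
    # simulator) would handle contacts, constraints, and full-body dynamics
    qdd = tau / inertia
    qd = qd + qdd * dt
    q = q + qd * dt
    return q, qd

def rollout(policy, q, qd, steps=240):
    # One episode of the control loop: policy -> torques -> simulation step
    for _ in range(steps):
        q_target = policy(q, qd)           # policy outputs target joint angles
        tau = pd_torques(q, qd, q_target)  # low-level PD converts to torques
        q, qd = physics_step(q, qd, tau)   # physics engine updates the state
    return q, qd
```

In a learned setting, `policy` would be a neural network trained with deep reinforcement learning to satisfy user input constraints (e.g. a target trajectory or velocity) while the simulator enforces physical realism.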