Abstract:
To perceive objects efficiently while exploring unknown indoor scenes, an autonomous exploration and object perception algorithm for a robot is proposed. Using deep reinforcement learning, the robot learns to exploit the layout rules and semantic information of the scene, acquiring a more efficient, higher-quality exploration strategy through interaction with the environment. To mitigate the difficulty of reinforcement learning training, the algorithm adopts a modular framework consisting of a simultaneous localization and mapping (SLAM) module, a global exploration module, a path planning module, and a local exploration module. The SLAM module constructs a map from the data obtained by the sensors. The global exploration module then selects a long-term goal on this map to guide the robot toward areas yet to be explored. Next, the path planning module generates a collision-free trajectory for robot navigation. Finally, the local exploration module plans the orientation of the robot's sensor at each step based on the local map and updates the map. Comparative experiments against two state-of-the-art algorithms, SC and ANS, are conducted on the public Gibson and Matterport3D datasets in the Habitat simulation environment. The results show that the proposed algorithm achieves object perception rates of 0.942, 0.866, 0.652, and 0.506 in small, medium, large, and extra-large scenes respectively, demonstrating good perception performance across scene sizes.
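The four-module loop described above (SLAM, global exploration, path planning, local exploration) can be illustrated in a toy grid world. This is a minimal sketch under assumed simplifications, not the paper's implementation: all function names are hypothetical, the "planner" is a simple Manhattan walk standing in for a real collision-free planner, and the local exploration step is reduced to folding visited cells into the map.

```python
def slam_update(grid_map, pose):
    # SLAM module (toy): integrate the current observation into the map
    # by marking the robot's cell as observed.
    grid_map.add(pose)
    return grid_map

def choose_long_term_goal(grid_map, frontier):
    # Global exploration module (toy): pick the first frontier cell
    # that has not yet been mapped; a learned policy would score these.
    for cell in frontier:
        if cell not in grid_map:
            return cell
    return None  # nothing left to explore

def plan_path(start, goal):
    # Path planning module (toy): axis-aligned Manhattan walk as a
    # stand-in for a real collision-free trajectory planner.
    path, (x, y) = [], start
    gx, gy = goal
    while (x, y) != (gx, gy):
        x += (gx > x) - (gx < x)   # step in x first
        if x == gx:
            y += (gy > y) - (gy < y)  # then step in y
        path.append((x, y))
    return path

def explore(start, frontier):
    # Top-level loop: SLAM -> global goal -> path plan -> local update.
    grid_map, pose = set(), start
    grid_map = slam_update(grid_map, pose)
    while (goal := choose_long_term_goal(grid_map, frontier)) is not None:
        for pose in plan_path(pose, goal):
            # Local exploration module (toy): here the real algorithm would
            # also plan the sensor orientation at each step.
            grid_map = slam_update(grid_map, pose)
    return grid_map
```

For example, `explore((0, 0), [(2, 0), (0, 2)])` visits both frontier cells and returns the set of all mapped cells along the way.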