Abstract:
To address the problem that existing grasp detection networks require large amounts of labeled training data and adapt poorly to grasping novel objects, a 6DoF grasp pose detection method for stacked scenes is proposed. The method consists of a grasp pose template library, an object selection network, and a grasp mapping module. First, based on geometric appearance features and the force-closure principle, a set of 6DoF grasp poses satisfying force closure is generated on each object model to construct the grasp pose template library. Next, the original scene point cloud is segmented, and the object to be grasped is selected according to visibility, occlusion, and confidence. Then, based on the pose of the selected object in the scene, the grasp poses in the template library are mapped onto that object. Finally, the collision-free 6DoF grasp pose closest to the object's center of mass is selected to achieve stable grasping. Experiments conducted on six types of object stacking scenes show that the method achieves an average grasp success rate of 96.2%, which is 5-20 percentage points higher than that of PointNetGPD.
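As a minimal illustrative sketch (not the paper's implementation), the final mapping-and-selection step could be organized as follows, assuming template grasps expressed in the object's model frame, a 4x4 object pose from pose estimation, and a hypothetical `is_collision_free` check supplied by the user:

```python
# Illustrative sketch only: map 6DoF grasp templates (defined in the object's model
# frame) into the scene via the estimated object pose, then select the collision-free
# grasp whose center is closest to the object's center of mass.
# GraspPose and is_collision_free are hypothetical names, not the paper's API.
from dataclasses import dataclass
import numpy as np


@dataclass
class GraspPose:
    pose: np.ndarray   # 4x4 homogeneous transform of the gripper
    width: float       # gripper opening width in meters


def map_and_select_grasp(templates, object_pose, center_of_mass, is_collision_free):
    """Transform template grasps into the scene and pick a stable one.

    templates        : list[GraspPose] in the object's model frame
    object_pose      : 4x4 transform of the object in the scene frame
    center_of_mass   : (3,) center of mass of the object in the scene frame
    is_collision_free: callable(GraspPose) -> bool, e.g. a gripper/scene collision check
    """
    best, best_dist = None, np.inf
    for g in templates:
        # Map the grasp from the model frame into the scene frame.
        scene_pose = object_pose @ g.pose
        candidate = GraspPose(pose=scene_pose, width=g.width)
        if not is_collision_free(candidate):
            continue
        # Distance from the grasp center (translation part) to the center of mass.
        dist = np.linalg.norm(scene_pose[:3, 3] - center_of_mass)
        if dist < best_dist:
            best, best_dist = candidate, dist
    return best  # None if every mapped grasp collides


if __name__ == "__main__":
    # Toy usage: two template grasps, an identity object pose, a trivial collision check.
    offset = np.eye(4)
    offset[2, 3] = 0.1
    templates = [GraspPose(np.eye(4), 0.04), GraspPose(offset, 0.06)]
    grasp = map_and_select_grasp(templates, np.eye(4), np.zeros(3), lambda g: True)
    print(grasp.pose[:3, 3])  # grasp translation closest to the center of mass
```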