

Object Detection Algorithm Based on Adaptive Feature Fusion and Cosine Similarity IoU-NMS



    Abstract: To address the missed and duplicate detections that occur in classical object detectors such as the anchor-based RetinaNet and the anchor-free FCOS, this paper proposes an object detection algorithm based on adaptive feature fusion and cosIoU-NMS. First, an adaptive feature fusion module performs weighted fusion of three adjacent levels of the multi-scale features to obtain rich contextual and spatial information. Then, cosIoU, which combines the cosine similarity and the overlap area between detection boxes, is computed to localize targets more precisely. Finally, cosIoU-NMS replaces Greedy-NMS to suppress redundant boxes with high confidence scores, and thus retains more accurate detection results. With RetinaNet and FCOS as baselines, experiments on the PASCAL VOC dataset show that the proposed algorithm reaches detection accuracies of 81.3% and 82.3%, improvements of 2.8% and 1.2%, respectively; on the MS COCO dataset, the accuracies reach 36.8% and 38.0%, gains of 1.0% and 0.7%, respectively. The algorithm strengthens feature representation, filters out redundant detection boxes, and effectively improves detection performance.
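The abstract describes weighted fusion of three adjacent feature-pyramid levels but gives no formula. The sketch below is a minimal illustration under the assumption that each level contributes through a softmax-normalized scalar weight (as in ASFF-style fusion) after the neighboring levels have been resized to a common resolution; the paper's module may instead predict spatial weight maps, and the function name `adaptive_fuse` is hypothetical.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D weight vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def adaptive_fuse(feat_up, feat_mid, feat_down, logits):
    """Fuse three adjacent pyramid levels, already resized to the middle
    level's resolution, using softmax-normalized fusion weights.
    Scalar per-level weights are an illustrative assumption."""
    w = softmax(np.asarray(logits, dtype=np.float64))
    return w[0] * feat_up + w[1] * feat_mid + w[2] * feat_down

# With equal logits the fusion reduces to a plain average of the levels.
a = np.ones((2, 2))
b = 2 * np.ones((2, 2))
c = 3 * np.ones((2, 2))
fused = adaptive_fuse(a, b, c, [0.0, 0.0, 0.0])
```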

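The cosIoU-NMS step can likewise be sketched. The abstract states only that cosIoU combines the cosine similarity and the overlap area of two boxes; the combination below (IoU scaled by the cosine similarity of the raw box coordinate vectors) is one plausible reading, not the paper's exact definition, and `cos_iou_nms` is a hypothetical name.

```python
import numpy as np

def iou(a, b):
    # Boxes as [x1, y1, x2, y2]; standard intersection-over-union.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def cos_iou(a, b):
    """Illustrative cosIoU: IoU weighted by the cosine similarity of the
    box coordinate vectors (an assumption; the abstract gives no formula)."""
    va, vb = np.asarray(a, float), np.asarray(b, float)
    cos = va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9)
    return iou(a, b) * cos

def cos_iou_nms(boxes, scores, thresh=0.5):
    # Greedy NMS loop, but with cosIoU as the suppression criterion.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = [j for j in order[1:] if cos_iou(boxes[i], boxes[j]) <= thresh]
        order = np.array(rest, dtype=int)
    return keep

# Two heavily overlapping boxes plus one distant box: the duplicate of
# the top-scoring box is suppressed, the distant box survives.
boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
kept = cos_iou_nms(boxes, [0.9, 0.8, 0.7], thresh=0.5)
```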

