Multi-Scale Referring Image Segmentation Based on Dual Attention
Abstract: To address the lack of sufficient cross-modal interaction between vision and language in referring image segmentation, as well as the differences in spatial and semantic information across targets of different sizes, this paper proposes a multi-scale referring image segmentation method based on a dual attention mechanism. First, different types of informative words in the referring expression are exploited to strengthen the cross-modal alignment of visual and linguistic features, and a dual attention mechanism captures the dependencies among multimodal features, realizing both inter-modal and intra-modal interaction. Second, with linguistic features as guidance, target-relevant visual information is aggregated from the other feature levels to further enhance the feature representation. Then, a bidirectional ConvLSTM progressively integrates low-level spatial details and high-level semantics along bottom-up and top-down paths. Finally, atrous convolutions with different dilation rates fuse multi-scale information, improving the model's ability to perceive segmentation targets at different scales. Experiments on the UNC, UNC+, GRef, and ReferIt benchmark datasets show that the proposed method improves oIoU by 1.81, 1.26, 0.84, and 0.32 percentage points, respectively, and extensive ablation studies validate the effectiveness of each component of the approach.
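The inter-modal and intra-modal interaction described above can be illustrated with scaled dot-product attention: visual positions query language tokens (and vice versa) for cross-modal alignment, while a modality attending to itself captures intra-modal dependencies. This is a minimal NumPy sketch of the general mechanism, not the paper's exact module; the feature sizes and variable names are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, key, value):
    """Scaled dot-product attention: each query row attends over key/value rows.

    query: (Nq, d), key/value: (Nk, d) -> output: (Nq, d).
    """
    d = query.shape[-1]
    scores = query @ key.T / np.sqrt(d)      # (Nq, Nk) pairwise affinities
    return softmax(scores, axis=-1) @ value  # weighted sum of value rows

# Toy features: 6 flattened visual positions, 4 language tokens, 8-dim each.
rng = np.random.default_rng(0)
visual = rng.standard_normal((6, 8))
language = rng.standard_normal((4, 8))

# Inter-modal interaction: each modality queries the other.
vis2lang = attention(visual, language, language)  # language-aware visual features
lang2vis = attention(language, visual, visual)    # vision-aware language features

# Intra-modal interaction: a modality models dependencies within itself.
vis_self = attention(visual, visual, visual)
```

In the full model these attention outputs would be projected and combined with the inputs (e.g. via residual connections) rather than used directly.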
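The final multi-scale fusion step relies on atrous (dilated) convolution: spacing the taps of a 3x3 kernel r pixels apart enlarges the receptive field without adding parameters, and branches with different rates are then merged. The sketch below, assuming a single channel and a toy averaging kernel, shows the idea in plain NumPy; a real ASPP head would use learned multi-channel kernels and a 1x1 convolution to fuse branches.

```python
import numpy as np

def dilated_conv2d(x, kernel, rate):
    """Minimal single-channel 3x3 dilated convolution with 'same' padding.

    A dilation rate r places the 3x3 taps r pixels apart, so the effective
    receptive field grows to (2r+1) x (2r+1).
    """
    pad = rate  # for a 3x3 kernel, 'same' padding equals the dilation rate
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(3):
        for j in range(3):
            di, dj = i * rate, j * rate   # tap offset within the padded map
            out += kernel[i, j] * xp[di:di + h, dj:dj + w]
    return out

# Fuse branches with different dilation rates, ASPP-style.
feat = np.random.default_rng(1).standard_normal((16, 16))
kernel = np.full((3, 3), 1 / 9.0)  # toy averaging kernel (learned in practice)
branches = [dilated_conv2d(feat, kernel, r) for r in (1, 2, 4)]
fused = np.mean(branches, axis=0)  # stand-in for the fusion convolution
```

Because every branch keeps the input resolution, targets of very different sizes are covered by at least one branch's receptive field before fusion.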