Citation: Liu Bing, Wang Tiantian, Fu Ping, Sun Shaowei, Li Yongqiang. Co-Saliency Detection Based on Unified Hierarchical Graph Neural Network[J]. Journal of Computer-Aided Design & Computer Graphics, 2023, 35(7): 1010-1019. DOI: 10.3724/SP.J.1089.2023.19503

Co-Saliency Detection Based on Unified Hierarchical Graph Neural Network

  • Abstract: Co-saliency detection aims to identify the common and salient objects in a group of related images. Its main challenge is how to mine and exploit both intra-image and inter-image saliency cues. This paper presents a co-saliency detection method based on a unified hierarchical graph neural network. First, each image is segmented by a superpixel segmentation algorithm, and intra-image hierarchical saliency features are extracted to construct a graph model. Second, hierarchical saliency graph embeddings across images are mined to form a unified two-dimensional hierarchical feature system. Finally, a geometric attention module is proposed to make full use of both intra-image and inter-image cues. Ablation experiments on the iCoSeg dataset show that each module of the proposed unified hierarchical graph neural network is effective. On the iCoSeg dataset, the proposed method achieves a maximum F-measure of 0.8486, a mean absolute error of 0.1076, and an S-measure of 0.8134, which are comparable to or better than those of nine competing methods, and both the highlight consistency and the edges of the resulting saliency maps are clearly improved.
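The abstract describes a pipeline that starts with superpixel segmentation and intra-image graph construction. The sketch below is only a rough illustration of that first step under simplifying assumptions: it segments an image with SLIC and builds a graph whose nodes are superpixels and whose edges connect spatially adjacent regions. The function name, the SLIC parameters, and the mean-colour node features are illustrative placeholders, not the authors' implementation, which uses hierarchical deep features, inter-image graph embeddings, and a geometric attention module.

```python
# Minimal sketch (assumption, not the paper's code): superpixel segmentation
# followed by construction of an intra-image graph over superpixels.
import numpy as np
from skimage.segmentation import slic
from skimage.data import astronaut


def build_superpixel_graph(image, n_segments=200):
    """Segment an image into superpixels and return simple node features
    plus an adjacency matrix linking neighbouring superpixels."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n_nodes = labels.max() + 1

    # Node features: mean colour per superpixel (a stand-in for the
    # hierarchical saliency features used in the paper).
    feats = np.zeros((n_nodes, image.shape[2]), dtype=np.float64)
    for i in range(n_nodes):
        feats[i] = image[labels == i].mean(axis=0)

    # Edges: two superpixels are connected if they touch horizontally
    # or vertically anywhere in the label map.
    right = np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1)
    down = np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1)
    pairs = np.unique(np.sort(np.concatenate([right, down]), axis=1), axis=0)

    adj = np.zeros((n_nodes, n_nodes), dtype=np.float64)
    for a, b in pairs:
        if a != b:
            adj[a, b] = adj[b, a] = 1.0

    return feats, adj


if __name__ == "__main__":
    img = astronaut()                      # sample RGB image from scikit-image
    feats, adj = build_superpixel_graph(img)
    print(feats.shape, adj.shape)          # roughly (200, 3) and (200, 200)
```

In the paper, one such graph per image would then be connected to the graphs of the other images in the group so that inter-image cues can be propagated; the sketch stops at the single-image graph.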
