Citation: LIANG Liming, YANG Yuan, HE Anjun, DONG Xin, WU Jian. Cross-layer Transformer and Multi-scale Adaptive Fusion of Retinal Vascular Segmentation Algorithm[J]. Journal of Computer-Aided Design & Computer Graphics. DOI: 10.3724/SP.J.1089.2023-00271

Cross-layer Transformer and Multi-scale Adaptive Fusion of Retinal Vascular Segmentation Algorithm

Abstract: To address problems in existing retinal vessel segmentation methods, such as mis-segmentation of the optic disc, blurred main-vessel texture, and breakage of fine branch vessels, a retinal vessel segmentation algorithm fusing CLTransformer with cross-scale attention is proposed. First, a lightweight residual module is designed for the encoder and decoder to achieve coarse-grained extraction of vessel texture features. Second, a multi-scale feature selection module is employed at the encoder-decoder connections to fuse the coarse-grained features across levels. Third, a cross-layer Transformer module is added at the bottom of the network to cross-fuse deep semantic information and refine the contours of retinal vessel features. Finally, a fusion loss function supervises the training and testing of the algorithm. Experiments on the DRIVE, STARE, and CHASE_DB1 datasets yield accuracies of 97.10%, 97.66%, and 97.62%, specificities of 98.64%, 99.03%, and 98.72%, and F1 scores of 83.05%, 84.07%, and 81.18%, respectively. Overall, the proposed algorithm outperforms most existing state-of-the-art algorithms.
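As a concrete illustration of the pipeline the abstract describes, below is a minimal PyTorch sketch of a U-shaped network with a lightweight residual encoder-decoder and a Transformer at the bottom. All names (LightResBlock, CrossLayerTransformer, VesselSegNet), channel widths, and module internals are assumptions for illustration only; the abstract does not specify them, and the multi-scale feature selection module is reduced here to plain skip-connection concatenation.

```python
import torch
import torch.nn as nn


class LightResBlock(nn.Module):
    """Hypothetical lightweight residual block using depthwise-separable
    convolutions; the abstract only states that a lightweight residual
    module is used, not its internals."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, in_ch, 3, padding=1, groups=in_ch, bias=False),
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        return self.body(x) + self.skip(x)


class CrossLayerTransformer(nn.Module):
    """Stand-in for the cross-layer Transformer bottleneck: deep features are
    flattened to tokens and refined with multi-head self-attention."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                      # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)  # (B, H*W, C)
        out, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + out)
        return tokens.transpose(1, 2).reshape(b, c, h, w)


class VesselSegNet(nn.Module):
    """U-shaped skeleton matching the abstract: residual encoder/decoder with
    a Transformer at the bottom; the multi-scale feature selection module is
    reduced to plain skip concatenation for brevity."""

    def __init__(self, ch=(16, 32, 64)):
        super().__init__()
        self.enc1 = LightResBlock(1, ch[0])
        self.enc2 = LightResBlock(ch[0], ch[1])
        self.bottom = nn.Sequential(LightResBlock(ch[1], ch[2]),
                                    CrossLayerTransformer(ch[2]))
        self.pool = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec2 = LightResBlock(ch[2] + ch[1], ch[1])
        self.dec1 = LightResBlock(ch[1] + ch[0], ch[0])
        self.head = nn.Conv2d(ch[0], 1, 1)

    def forward(self, x):                      # x: (B, 1, H, W), H and W divisible by 4
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up(d2), e1], dim=1))
        return self.head(d1)                   # logits; apply sigmoid for a vessel mask
```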
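The abstract mentions a fusion loss function but does not define it. A common choice for retinal vessel segmentation is a weighted combination of binary cross-entropy and Dice loss, which balances per-pixel accuracy against region overlap on thin structures; the FusionLoss sketch below assumes that combination rather than reproducing the authors' formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionLoss(nn.Module):
    """Assumed fusion loss: weighted BCE + Dice. BCE supervises per-pixel
    classification; Dice counteracts the heavy class imbalance between thin
    vessels and background. Weights are illustrative, not from the paper."""

    def __init__(self, bce_weight: float = 0.5, smooth: float = 1.0):
        super().__init__()
        self.bce_weight = bce_weight
        self.smooth = smooth

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        bce = F.binary_cross_entropy_with_logits(logits, target)
        prob = torch.sigmoid(logits)
        inter = (prob * target).sum(dim=(2, 3))
        denom = prob.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
        dice = 1.0 - ((2.0 * inter + self.smooth) / (denom + self.smooth)).mean()
        return self.bce_weight * bce + (1.0 - self.bce_weight) * dice
```

Usage: loss = FusionLoss()(model(images), masks), where masks are float tensors with the same (B, 1, H, W) shape as the logits.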
