Zhang Jianwei, Zhang Xubin, Xu Yuyang, Chen Wei. Spatial Prior-Embedded Neural Networks for Medical Image Segmentation[J]. Journal of Computer-Aided Design & Computer Graphics, 2021, 33(8): 1287-1294. DOI: 10.3724/SP.J.1089.2021.18652


Spatial Prior-Embedded Neural Networks for Medical Image Segmentation



    Abstract: The performance of neural network-based methods for medical image segmentation is still unsatisfactory, mainly due to the limited generalization of neural networks, the uneven quality of medical images, and the irregularity and infiltration of tumors. To fully utilize image-specific information, we propose prior-embedded networks (PEN). By introducing image-specific spatial priors into neural networks, PEN focuses on lesion regions to learn discriminative features and ignore unrelated information, thereby extracting crucial features that improve segmentation performance. Two medical image segmentation frameworks, 2D U-Net and 3D nnU-Net, are used as backbone networks, and their performance is evaluated on the liver tumor segmentation task using the LiTS data set. With five-fold cross-validation, PEN improves segmentation accuracy by 22.4% on the training set compared with 2D U-Net, and by 1.2% and 4.4% on the test set compared with ensemble nnU-Net and single nnU-Net, respectively.
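One common way to embed an image-specific spatial prior, as described above, is to encode the approximate lesion location as an extra input channel that the backbone network receives alongside the image. The sketch below illustrates this idea in NumPy; the Gaussian form of the prior, the seed coordinates, and the function names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def spatial_prior_map(shape, center, sigma):
    """Gaussian prior centered on an approximate lesion location:
    values near 1 inside the likely lesion region, decaying toward 0."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (yy - center[0]) ** 2 + (xx - center[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def embed_prior(image, prior):
    """Stack the prior as an additional input channel -> (C+1, H, W),
    so the segmentation backbone sees image and prior jointly."""
    if image.ndim == 2:
        image = image[None]  # promote (H, W) to (1, H, W)
    return np.concatenate([image, prior[None]], axis=0)

# Hypothetical 64x64 slice with an assumed lesion seed near (20, 40)
img = np.random.rand(64, 64).astype(np.float32)
prior = spatial_prior_map(img.shape, center=(20, 40), sigma=8.0)
x = embed_prior(img, prior)
print(x.shape)  # (2, 64, 64): one image channel plus one prior channel
```

Feeding the prior as a channel lets the first convolution weight lesion-centered evidence directly, which is one plausible mechanism for the focusing effect the abstract describes.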
