Self-supervised Classification Method of Breast Histopathological Images Based on Transformer

Abstract: Self-supervised classification of breast histopathological images can assist pathologists in screening breast cancer patients. Current self-supervised learning methods learn image feature representations by constructing auxiliary tasks. However, the features extracted in this way are biased toward solving the auxiliary task and struggle to capture the intrinsic characteristics of the pathological images themselves, which degrades the model's performance on downstream tasks. To address this issue, this paper proposes a Transformer-based self-supervised classification method for breast histopathological images. Drawing on the ability of convolutional neural networks to perceive local information and of vision Transformers to perceive global information in pathological images, a feature extraction network named DenseSwinNet is designed. In addition, a classifier based on clustering and self-supervised learning is constructed to aggregate the local and global features of breast histopathological images and predict whether cancerous changes have occurred. On Camelyon16, a public dataset for breast histopathological image classification, the proposed method achieves an accuracy of 0.9016, an F1-score of 0.857, and an AUC of 0.9247. The experimental results show that the proposed method effectively improves classification performance. Furthermore, a visual analysis of the regions the model attends to demonstrates that the model has good interpretability.
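
The abstract describes a two-branch feature extractor that pairs a convolutional network (local detail) with a Swin Transformer (global context). As a rough illustration of that general idea only, the minimal PyTorch sketch below combines a DenseNet-121 branch and a Swin-T branch; the class name HybridFeatureExtractor, the choice of backbones, the projection sizes, and the concatenation-based fusion are all assumptions made for illustration and are not the paper's actual DenseSwinNet.

# Minimal sketch of a hybrid local/global feature extractor in the spirit of
# DenseSwinNet. Backbones, sizes, and fusion strategy are illustrative
# assumptions, not the authors' published architecture.
import torch
import torch.nn as nn
import torchvision.models as models

class HybridFeatureExtractor(nn.Module):
    """Fuses CNN (local) and Swin Transformer (global) features of a patch."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        # DenseNet-121 backbone: captures local texture and morphology cues.
        densenet = models.densenet121(weights=None)
        self.local_branch = densenet.features            # -> (B, 1024, H/32, W/32)
        self.local_proj = nn.Linear(1024, embed_dim)

        # Swin-T backbone: captures global context via shifted-window
        # self-attention; torchvision keeps its tokens channels-last.
        swin = models.swin_t(weights=None)
        self.global_branch = swin.features                # -> (B, H/32, W/32, 768)
        self.global_proj = nn.Linear(768, embed_dim)

        # Simple concatenation-based fusion head; the paper's aggregation
        # of local and global features is presumably more involved.
        self.fusion = nn.Sequential(
            nn.LayerNorm(2 * embed_dim),
            nn.Linear(2 * embed_dim, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Local features: global-average-pool the CNN feature map.
        f_local = self.local_branch(x).mean(dim=(2, 3))   # (B, 1024)
        f_local = self.local_proj(f_local)                # (B, embed_dim)

        # Global features: average the Swin token grid.
        f_global = self.global_branch(x).mean(dim=(1, 2)) # (B, 768)
        f_global = self.global_proj(f_global)             # (B, embed_dim)

        return self.fusion(torch.cat([f_local, f_global], dim=-1))

if __name__ == "__main__":
    model = HybridFeatureExtractor()
    patch = torch.randn(2, 3, 224, 224)   # two RGB histopathology patches
    print(model(patch).shape)             # torch.Size([2, 256])

The fused patch embedding would then feed the clustering- and self-supervision-based classifier the abstract mentions; that component is not sketched here because the abstract gives no detail about its construction.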
