Huaiping Jin, Yuquan Tao, Zhenhui Li, Haibo Tao, Bin Wang, Feiyue Xue. Survival Prediction Algorithm for Gastric Cancer Patients Based on Multi-modal Multi-instance Learning[J]. Journal of Computer-Aided Design & Computer Graphics. DOI: 10.3724/SP.J.1089.2023-00491


Survival Prediction Algorithm for Gastric Cancer Patients Based on Multi-modal Multi-instance Learning


Abstract: Survival prediction is of great significance for the treatment of gastric cancer patients. As one of the gold standards for tumor diagnosis, histopathological images have received wide attention in tumor prognosis prediction in recent years. However, conventional survival prediction methods usually rely on a single data modality, ignoring the correlation and complementarity between modalities. Moreover, most histopathological images lack pixel-level labels, which makes effective supervised learning difficult. To address these issues, this paper proposes a survival prediction algorithm for gastric cancer patients based on multi-modal multi-instance learning. The method first extracts features from clinical data and histopathological images. It then adopts a global-aware multi-instance learning method to extract bag-level embeddings from high-magnification histopathological images, while average pooling is applied to obtain instance-level embeddings at low magnification. Finally, a multi-modal fusion module fuses the bag-level embeddings, instance-level embeddings, and clinical features, enabling information interaction across modalities and making full use of image information at different magnifications. Experimental results show that, compared with traditional unimodal approaches, the proposed method achieves significant performance gains in survival prediction for gastric cancer patients.
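The abstract only outlines the architecture at a high level. The following is a minimal, illustrative PyTorch sketch of such a pipeline, not the authors' implementation: it assumes patch features have already been extracted (e.g., by a CNN backbone), uses a gated-attention MIL pooling module as a stand-in for the paper's global-aware multi-instance learning, and fuses the bag-level embedding, the low-magnification mean-pooled embedding, and clinical features into a single risk score. All class names, parameter names, and feature dimensions (GatedAttentionMIL, MultiModalSurvivalNet, img_dim, clin_dim) are hypothetical.

    # Illustrative sketch only; not the authors' code. Assumes pre-extracted patch features.
    import torch
    import torch.nn as nn

    class GatedAttentionMIL(nn.Module):
        """Attention-based MIL pooling: aggregates high-magnification patch features
        into one bag-level embedding (stand-in for the global-aware MIL module)."""
        def __init__(self, in_dim=512, hidden_dim=256):
            super().__init__()
            self.attn_v = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Tanh())
            self.attn_u = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.Sigmoid())
            self.attn_w = nn.Linear(hidden_dim, 1)

        def forward(self, patches):                       # patches: (N, in_dim)
            scores = self.attn_w(self.attn_v(patches) * self.attn_u(patches))  # (N, 1)
            weights = torch.softmax(scores, dim=0)        # attention over patches
            return (weights * patches).sum(dim=0)         # bag embedding: (in_dim,)

    class MultiModalSurvivalNet(nn.Module):
        """Fuses the bag-level embedding (high mag), mean-pooled instance embedding
        (low mag), and clinical features, then outputs a survival risk score."""
        def __init__(self, img_dim=512, clin_dim=16, fused_dim=256):
            super().__init__()
            self.mil = GatedAttentionMIL(img_dim)
            self.clin_enc = nn.Sequential(nn.Linear(clin_dim, 64), nn.ReLU())
            self.fusion = nn.Sequential(
                nn.Linear(img_dim * 2 + 64, fused_dim), nn.ReLU(), nn.Dropout(0.25))
            self.risk_head = nn.Linear(fused_dim, 1)      # scalar risk score

        def forward(self, high_mag_patches, low_mag_patches, clinical):
            bag_emb = self.mil(high_mag_patches)          # attention MIL, high magnification
            inst_emb = low_mag_patches.mean(dim=0)        # average pooling, low magnification
            clin_emb = self.clin_enc(clinical)            # encoded clinical features
            fused = self.fusion(torch.cat([bag_emb, inst_emb, clin_emb], dim=-1))
            return self.risk_head(fused)                  # higher value = higher predicted risk

    if __name__ == "__main__":
        model = MultiModalSurvivalNet()
        risk = model(torch.randn(200, 512),               # 200 high-magnification patch features
                     torch.randn(50, 512),                # 50 low-magnification patch features
                     torch.randn(16))                     # clinical feature vector
        print(risk.shape)                                 # torch.Size([1])

In practice such a risk score would typically be trained with a Cox partial-likelihood loss and evaluated with the concordance index; the abstract does not specify the loss or evaluation metric, so these are assumptions.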
