Citation: Shuchao Song, Yiqiang Chen, Hanchao Yu, Yingwei Zhang, Xiaodong Yang. Review of Human-centered Explainable AI in Healthcare[J]. Journal of Computer-Aided Design & Computer Graphics. DOI: 10.3724/SP.J.1089..2024-00052

Review of Human-centered Explainable AI in Healthcare

Abstract: With the rapid development of Artificial Intelligence (AI), "black-box" models have demonstrated capabilities that approach, or even surpass, human performance. Their explainability, however, is a key foundation for users to trust and understand AI in practice, particularly in high-risk scenarios such as intelligent healthcare. Although previous research has introduced numerous ante-hoc and post-hoc explainable AI methods, most of them adopt a "one-fits-all" approach, disregarding the multidimensional understanding and trust requirements of diverse users in different contexts. Human-centered explainable AI, which analyzes the explainability of AI models according to users' actual needs, has received growing attention from researchers in recent years. Therefore, this article reviews existing human-centered explainable AI methods and systems, with an emphasis on the intelligent healthcare domain. It proposes a systematic approach to identifying and organizing explainability needs from three aspects: decision time cost, user expertise, and diagnosis workflow. Furthermore, it offers practical suggestions on how to design explainable medical diagnostic systems.

     
