Shuchao Song, Yiqiang Chen, Hanchao Yu, Yingwei Zhang, Xiaodong Yang. Review of Human-centered Explainable AI in Healthcare[J]. Journal of Computer-Aided Design & Computer Graphics. DOI: 10.3724/SP.J.1089..2024-00052

Review of Human-centered Explainable AI in Healthcare

  • With the development of Artificial Intelligence (AI), "black box" models have demonstrated capabilities that approach, and in some cases surpass, human performance. However, explainability is crucial for users to trust and understand AI applications in daily life, particularly in high-risk scenarios such as intelligent healthcare. Although previous research has introduced numerous direct and post-hoc explainable AI methods, many of them follow a "one-size-fits-all" approach, disregarding the multidimensional understanding and trust requirements of diverse users in different contexts. In recent years, human-centered explainable AI, which tailors explainable analyses of AI models to the specific needs of users, has drawn growing attention from researchers worldwide. This article therefore reviews existing human-centered explainable AI methods and systems, with particular emphasis on the intelligent healthcare domain. It proposes a systematic approach to exploring and identifying explainability needs along three dimensions: decision time cost, user expertise, and diagnosis workflow. Finally, it offers practical suggestions for designing explainable medical diagnostic systems.
