Review of Human-Centered Explainable AI in Healthcare
Graphical Abstract
Abstract
With the development of Artificial Intelligence (AI), “black box” models have demonstrated capabilities that approach, and in some cases surpass, human performance. However, ensuring that AI is explainable is crucial for users to trust and understand its applications in daily life, particularly in high-risk domains such as healthcare. Although prior research has introduced numerous directly interpretable and post-hoc explainable AI methods, many of them follow a “one-size-fits-all” approach, disregarding the multidimensional understanding and trust requirements of diverse users in different contexts. In recent years, human-centered explainable AI, which aims to provide explanations of AI models tailored to the specific needs of users, has attracted growing attention from researchers worldwide. This article surveys literature published over the past five years at top-tier international conferences in human-computer interaction, with a particular emphasis on healthcare. It reviews existing human-centered explainable AI methods and systems for computer-aided diagnosis, computer-aided treatment, and preventive disease warning. Based on this review, it identifies explainability needs from three perspectives: decision time constraints, user expertise levels, and diagnostic workflow processes. Additionally, the article describes four classic user persona types with representative examples and offers suggestions for designing explainable medical diagnostic systems that account for resource constraints, the varying needs of different stakeholders, and integration with existing clinical workflows.