Evaluation of Human-Understandability of Global Model Explanations using Decision Tree

Adarsa Sivaprasad, Ehud Reiter, Nir Oren, Nava Tintarev

Research output: Working paper › Preprint

Abstract

In explainable artificial intelligence (XAI) research, the predominant focus has been on interpreting models for experts and practitioners. Model-agnostic and local explanation approaches are deemed interpretable and sufficient in many applications. However, in domains like healthcare, where end users are patients without AI or domain expertise, there is an urgent need for model explanations that are more comprehensible and instil trust in the model’s operations. We hypothesise that model explanations that are narrative, patient-specific and global (describing the model as a whole) would improve understandability and support decision-making. We test this using a decision tree model to generate both local and global explanations for patients identified as having a high risk of coronary heart disease, and present these explanations to non-expert users. We find strong individual preferences for a specific type of explanation: the majority of participants prefer global explanations, while a smaller group prefers local explanations. A task-based evaluation of these participants’ mental models provides valuable feedback for enhancing narrative global explanations, which in turn guides the design of health informatics systems that are both trustworthy and actionable.
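To make the local/global distinction concrete, the following is a minimal sketch (not the authors' code or data) of how a single decision tree can yield both kinds of explanation: a global one, rendered as the entire tree in text form, and a local one, rendered as the root-to-leaf path followed by one patient. It uses scikit-learn with synthetic data; the feature names (age, cholesterol, systolic_bp) and the toy risk label are hypothetical stand-ins for the paper's heart-disease dataset.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# Hypothetical risk-factor features standing in for the real dataset.
feature_names = ["age", "cholesterol", "systolic_bp"]
X = rng.normal(size=(200, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy "high risk" label

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Global explanation: a textual rendering of the whole model.
print(export_text(tree, feature_names=feature_names))

# Local explanation: the decision path for one patient.
patient = X[:1]
path = tree.decision_path(patient)          # sparse matrix of visited nodes
for node_id in path.indices:
    if tree.tree_.children_left[node_id] == -1:
        continue                            # skip the leaf itself
    feat_idx = tree.tree_.feature[node_id]
    thresh = tree.tree_.threshold[node_id]
    went_left = patient[0, feat_idx] <= thresh
    print(f"{feature_names[feat_idx]} {'<=' if went_left else '>'} {thresh:.2f}")

The global printout shows every rule the model can apply, while the local loop surfaces only the handful of conditions that determined this patient's prediction, which is the contrast the study presents to non-expert users.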
Original language: English
Publisher: arXiv
Number of pages: 20
Publication status: Published - 18 Sept 2023

Bibliographical note

Acknowledgement
We would like to thank Dr. Sameen Maruf and Prof. Ingrid Zukerman for generously sharing their expertise in utilising the dataset, and for their continuous support and valuable feedback in designing the experiment. We thank Nikolay Babakov and Prof. Alberto José Bugarín Diz for their feedback throughout the development of this research. We also thank the anonymous reviewers for their feedback, which has significantly improved this work. A. Sivaprasad is an ESR in the NL4XAI project, which has received funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 860621.

Keywords

  • Global Explanation
  • End-user Understandability
  • Health Informatics
