Influence of context on users’ views about explanations for decision-tree predictions

Sameen Maruf, Ingrid Zukerman (corresponding author), Ehud Reiter, Gholamreza Haffari

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)
2 Downloads (Pure)

Abstract

We consider the influence of two types of contextual information, background information available to users and users’ goals, on users’ views and preferences regarding textual explanations generated for the outcomes predicted by Decision Trees (DTs). To investigate the influence of background information, we generate contrastive explanations that address potential conflicts between aspects of DT predictions and plausible expectations licensed by background information. We define four types of conflicts, operationalize their identification, and specify explanatory schemas that address them. To investigate the influence of users’ goals, we employ an interactive setting where given a goal and an initial explanation for a predicted outcome, users select follow-up questions, and assess the explanations that answer these questions. Here, we offer algorithms to generate explanations that address six types of follow-up questions.
The main result from both user studies is that explanations which have a contrastive aspect about a predicted class are generally preferred by users. In addition, the results from the first study indicate that these explanations are deemed especially valuable when users’ expectations differ from predicted outcomes; and the results from the second study indicate that contrastive explanations which describe how to change a predicted outcome are particularly well regarded in terms of helping users achieve this goal, and are also popular in terms of helping users achieve other goals.
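As a rough illustration of the kind of contrastive explanation the abstract describes (this is a minimal sketch, not the authors’ algorithm; the tree, features, and wording are hypothetical), one can trace an instance’s decision path through a tree, find the nearest path leading to the contrast (foil) class, and report which conditions would have to differ:

```python
# Hypothetical sketch: a "why P rather than Q?" explanation from a decision tree.
# Internal nodes test one feature against a threshold; leaves carry a class.

TREE = {
    "feature": "income",
    "threshold": 50,
    "left": {"class": "rejected"},      # income <= 50
    "right": {
        "feature": "debt",
        "threshold": 20,
        "left": {"class": "approved"},  # debt <= 20
        "right": {"class": "rejected"},
    },
}

def trace(node, instance, path=()):
    """Return the instance's decision path (condition strings) and leaf class."""
    if "class" in node:
        return list(path), node["class"]
    f, t = node["feature"], node["threshold"]
    if instance[f] <= t:
        return trace(node["left"], instance, path + (f"{f} <= {t}",))
    return trace(node["right"], instance, path + (f"{f} > {t}",))

def _holds(instance, cond):
    f, op, t = cond
    return instance[f] <= t if op == "<=" else instance[f] > t

def leaf_paths(node, path=()):
    """Yield (conditions, class) for every root-to-leaf path."""
    if "class" in node:
        yield list(path), node["class"]
        return
    f, t = node["feature"], node["threshold"]
    yield from leaf_paths(node["left"], path + ((f, "<=", t),))
    yield from leaf_paths(node["right"], path + ((f, ">", t),))

def contrastive_explanation(tree, instance, foil):
    """Explain the predicted class, and what would have to differ to reach `foil`."""
    path, fact = trace(tree, instance)
    if fact == foil:
        return f"The outcome is already {foil}."
    # Pick the foil-class path with the fewest conditions the instance violates.
    best = min(
        (p for p, c in leaf_paths(tree) if c == foil),
        key=lambda p: sum(not _holds(instance, cond) for cond in p),
    )
    changes = [f"{f} were {op} {t}" for f, op, t in best
               if not _holds(instance, (f, op, t))]
    return (f"The outcome is {fact} because " + " and ".join(path) +
            f". It would be {foil} if " + " and ".join(changes) + ".")

print(contrastive_explanation(TREE, {"income": 60, "debt": 30}, "approved"))
# → The outcome is rejected because income > 50 and debt > 20.
#   It would be approved if debt were <= 20.
```

The "fact" part answers "why this outcome?" and the "foil" part answers "how could it change?", which corresponds loosely to the contrastive follow-up questions studied in the article.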
Original language: English
Article number: 101483
Number of pages: 39
Journal: Computer Speech & Language
Volume: 81
Early online date: 24 Feb 2023
DOIs
Publication status: Published - Jun 2023

Bibliographical note

This research was supported in part by grant DP190100006 from the Australian Research Council. Ethics approval for the user studies was obtained from Monash University Human Research Ethics Committee (ID-24208). We thank Marko Bohanec, one of the creators of the Nursery dataset, for helping us understand the features and their values. We are also grateful to the anonymous reviewers for their helpful comments.

Data Availability Statement

Data will be made available on request.

Keywords

  • Explainable AI
  • Generating textual explanations
  • Taking context into account
  • Contrastive explanations
  • Decision trees

