Explaining Decision-Tree Predictions by Addressing Potential Conflicts between Predictions and Plausible Expectations

Sameen Maruf, Ingrid Zukerman, Ehud Reiter, Gholamreza Haffari

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

Abstract

We offer an approach to explain Decision Tree (DT) predictions by addressing potential conflicts between aspects of these predictions and plausible expectations licensed by background information. We define four types of conflicts, operationalize their identification, and specify explanatory schemas that address them. Our human evaluation focused on the effect of explanations on users’ understanding of a DT’s reasoning and their willingness to act on its predictions. The results show that (1) explanations that address potential conflicts are considered at least as good as baseline explanations that just follow a DT path; and (2) the conflict-based explanations are deemed especially valuable when users’ expectations disagree with the DT’s predictions.
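
The baseline explanations mentioned in the abstract verbalise the tests along the tree path taken for a given instance. As an illustrative sketch only (not the authors’ code), such a path can be extracted with scikit-learn’s decision_path; the dataset and the wording of the verbalisation below are assumptions for illustration.

```python
# Minimal sketch (assumption, not from the paper): verbalising the
# decision-tree path followed for a single prediction with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

x = iris.data[:1]                      # one instance to explain
node_indicator = clf.decision_path(x)  # sparse matrix of nodes visited
leaf_id = clf.apply(x)[0]              # id of the leaf reached

feature = clf.tree_.feature
threshold = clf.tree_.threshold

# Node ids along a root-to-leaf path are increasing, so the sorted
# column indices of the single-row indicator give the path in order.
for node_id in node_indicator.indices:
    if node_id == leaf_id:             # terminal node: report the prediction
        print(f"=> predicted class: {iris.target_names[clf.predict(x)[0]]}")
        break
    name = iris.feature_names[feature[node_id]]
    value = x[0, feature[node_id]]
    op = "<=" if value <= threshold[node_id] else ">"
    print(f"{name} = {value:.2f} {op} {threshold[node_id]:.2f}")
```

The paper’s contribution goes beyond such path readouts: it identifies where a prediction conflicts with plausible expectations and addresses those conflicts in the explanation.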
Original language: English
Title of host publication: The 14th International Conference on Natural Language Generation
Subtitle of host publication: Proceedings of the Conference
Pages: 114–127
Number of pages: 14
Publication status: Published - 31 Aug 2021
Event: The 14th International Conference on Natural Language Generation - Virtual, Aberdeen, United Kingdom
Duration: 20 Sept 2021 – 24 Sept 2021
Conference number: 14
https://inlg2021.github.io/index.html

Conference

Conference: The 14th International Conference on Natural Language Generation
Country/Territory: United Kingdom
City: Aberdeen
Period: 20/09/21 – 24/09/21
Internet address: https://inlg2021.github.io/index.html
