The impact of inconsistent human annotations on AI driven clinical decision making

Aneeta Sylolypavan, Derek Sleeman, Honghan Wu*, Malcolm Sim

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

18 Citations (Scopus)
4 Downloads (Pure)

Abstract

In supervised learning model development, domain experts are often used to provide the class labels (annotations). Annotation inconsistencies commonly occur even when highly experienced clinical experts annotate the same phenomenon (e.g., a medical image, diagnostics, or prognostic status), due to inherent expert bias, judgments, and slips, among other factors. While their existence is relatively well known, the implications of such inconsistencies are largely understudied in real-world settings, when supervised learning is applied to such ‘noisy’ labelled data. To shed light on these issues, we conducted extensive experiments and analyses on three real-world Intensive Care Unit (ICU) datasets. Specifically, individual models were built from a common dataset, annotated independently by 11 ICU consultants at Queen Elizabeth University Hospital (QEUH), Glasgow, and model performance estimates were compared through internal validation (Fleiss’ κ = 0.383, i.e., fair agreement). Further, broad external validation (on both static and time-series datasets) of these 11 classifiers was carried out on the HiRID external dataset, where the models’ classifications were found to have low pairwise agreements (average Cohen’s κ = 0.255, i.e., minimal agreement). Moreover, the models tend to disagree more on making discharge decisions (Fleiss’ κ = 0.174) than on predicting mortality (Fleiss’ κ = 0.267). Given these inconsistencies, further analyses were conducted to evaluate the current best practices in obtaining gold-standard models and determining consensus. The results suggest that: (a) there may not always be a “super expert” in acute clinical settings (using internal and external validation model performances as a proxy); and (b) standard consensus seeking (such as majority vote) consistently leads to suboptimal models. Further analysis, however, suggests that assessing annotation learnability and using only ‘learnable’ annotated datasets for determining consensus achieves optimal models in most cases.
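The agreement statistics and consensus method named in the abstract (Cohen's κ, Fleiss' κ, and majority vote) can be illustrated with a small, self-contained sketch. The annotator label lists below are hypothetical toy data, not the QEUH annotations; the study's actual pipeline is in the linked repository.

```python
from collections import Counter

def cohens_kappa(a, b):
    # Chance-corrected pairwise agreement between two annotators' label lists.
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n                    # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum((ca[l] / n) * (cb[l] / n) for l in set(a) | set(b))  # chance agreement
    return (po - pe) / (1 - pe)

def fleiss_kappa(annotations):
    # Chance-corrected agreement among >2 annotators; rows = annotators.
    items = list(zip(*annotations))          # one tuple of labels per instance
    n, N = len(annotations), len(items)      # raters per item, number of items
    labels = {l for item in items for l in item}
    counts = [Counter(item) for item in items]
    # mean per-item agreement
    p_bar = sum((sum(c[l] ** 2 for l in labels) - n) / (n * (n - 1))
                for c in counts) / N
    # chance agreement from overall label proportions
    p_j = {l: sum(c[l] for c in counts) / (N * n) for l in labels}
    p_e = sum(p ** 2 for p in p_j.values())
    return (p_bar - p_e) / (1 - p_e)

def majority_vote(annotations):
    # Per-instance majority label across annotators (standard consensus seeking).
    return [Counter(item).most_common(1)[0][0] for item in zip(*annotations)]

# Hypothetical binary labels (1 = event, 0 = no event) from three annotators
ann1 = [1, 0, 1, 1, 0, 1]
ann2 = [1, 0, 0, 1, 0, 0]
ann3 = [1, 1, 1, 1, 0, 0]

print(cohens_kappa(ann1, ann2))            # pairwise agreement: 0.4
print(fleiss_kappa([ann1, ann2, ann3]))    # group agreement: 0.325
print(majority_vote([ann1, ann2, ann3]))   # consensus labels: [1, 0, 1, 1, 0, 0]
```

Even in this toy example the consensus labels discard each annotator's dissenting judgments, which is the behaviour the study finds can lead to suboptimal models when agreement is low.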

Original language: English
Article number: 26
Journal: npj Digital Medicine
Volume: 6
Issue number: 26
DOIs
Publication status: Published - 21 Feb 2023

Bibliographical note

Funding Information:
We thank all the consultants from the QEUH who annotated the set of instances that formed an important part of the analysis described in this paper. We also acknowledge helpful discussions with Prof Hugh Montgomery (Faculty of Medical Sciences, UCL). H.W. is supported by the Medical Research Council (MR/S004149/1, MR/S004149/2); National Institute for Health Research (NIHR202639); British Council (UCL-NMU-SEU International Collaboration On Artificial Intelligence In Medicine: Tackling Challenges Of Low Generalisability And Health Inequality); Wellcome Trust ITPA (PIII0054/005); and The Alan Turing Institute, London, UK. H.W. is the corresponding author of this paper, based at UCL, Gower St, London, WC1E 6BT, and contactable via email: [email protected].

Data Availability Statement

The QEUH training data that support the findings of this study may be available on request from the data controller and co-author, Malcolm Sim. The data are not publicly available, as individual-level healthcare data are protected by privacy laws. The HiRID and MIMIC-III datasets are publicly accessible at the following URLs:

1. MIMIC-III database: https://mimic.mit.edu/docs/gettingstarted/.

2. HiRID database: https://www.physionet.org/content/hirid/1.1.1/.

Code availability
For reproducibility, all dataset pre-processing and machine learning model code for this study is accessible here: https://github.com/aneeta-sylo/npjDigitalMedicine. The external validation datasets and machine learning models were constructed using Python 3.6.
