Acceptability of artificial intelligence in breast screening: Focus groups with the screening-eligible population in England

Lauren Gatting* (Corresponding Author), Syeda Ahmed, Priscilla Meccheri, Rumana Newlands, Angie Kehagia, Jo Waller


Research output: Contribution to journal › Article › peer-review

Abstract

Introduction
Preliminary studies of artificial intelligence (AI) tools developed to support breast screening demonstrate the potential to reduce radiologist burden and improve cancer detection, which could lead to improved breast cancer outcomes. This study explores the public acceptability of the use of AI in breast screening from the perspective of screening-eligible women in England.

Methods
Sixty-four women in England, aged 50–70 years (eligible for breast screening) and 45–49 years (approaching eligibility), participated in 12 focus groups (8 online and 4 in person). Specific scenarios in which AI may be used in the mammogram reading process were presented. Data were analysed using reflexive thematic analysis.

Results
Four themes describe the public perceptions of AI in breast screening found in this study: (1) Things going wrong and being missed summarises a predominant and pervasive concern about an AI tool being used in breast screening; (2) Speed of change and loss of control captures the women's positive association of AI with technological advances, alongside feelings that things were out of their control and that they were being left behind and in the dark; (3) The importance of humans reports concern that AI could exclude humans and render them redundant; and (4) Desire for thorough research, staggered implementation and double-checking of scans captures the insistence that any AI be thoroughly trialled and tested, and not solely relied on when initially implemented.

Conclusions
It will be essential that future decision-making and communication about AI implementation in breast screening (and, likely, in healthcare more widely) address concerns surrounding (1) the fallibility of AI; (2) the lack of inclusion, control and transparency in healthcare and technology decisions; and (3) humans being left redundant and unneeded, while building on women's hopes for the technology.
Original language: English
Article number: e000892
Number of pages: 11
Journal: BMJ Public Health
Volume: 2
Issue number: 2
Early online date: 12 Dec 2024
DOIs
Publication status: Published - Dec 2024

Data Availability Statement

No data are available. The dataset generated and analysed during the current study is not publicly available because work on it is ongoing. The research materials are available at https://osf.io/gqtcr, where future data availability details will be shared.

Funding

This study was funded by an Artificial Intelligence in Health and Care Award (grant/award number: not applicable), one of the NHS AI Lab programmes, to King’s Technology Evaluation Centre. The competitive award scheme is run by the Department of Health and Social Care. The AI Award has made funding available to accelerate the testing and evaluation of artificial intelligence technologies which meet the aims set out in the NHS Long Term Plan.

Funders: Department of Health and Social Care
