QUADAS-C: A tool for assessing risk of bias in comparative diagnostic accuracy studies

Bada Yang* (Corresponding Author), Sue Mallett, Yemisi Takwoingi, Clare F. Davenport, Christopher J. Hyde, Penny F. Whiting, Jonathan J. Deeks, Mariska M.G. Leeflang, QUADAS-C Group, Miriam Brazzelli

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

91 Citations (Scopus)

Abstract

Comparative diagnostic test accuracy studies assess and compare the accuracy of 2 or more tests in the same study. Although these studies have the potential to yield reliable evidence regarding comparative accuracy, shortcomings in their design, conduct, and analysis may bias their results. The currently recommended quality assessment tool for diagnostic test accuracy studies, QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies 2), is not designed for the assessment of test comparisons. The QUADAS-C (Quality Assessment of Diagnostic Accuracy Studies-Comparative) tool was developed as an extension of QUADAS-2 to assess the risk of bias in comparative diagnostic test accuracy studies. Through a 4-round Delphi study involving 24 international experts in test evaluation and a face-to-face consensus meeting, an initial version of the tool was developed, then revised and finalized following a pilot study among potential users. The QUADAS-C tool retains the 4-domain structure of QUADAS-2 (Patient Selection, Index Test, Reference Standard, and Flow and Timing) and adds questions to each QUADAS-2 domain. A risk-of-bias judgment for comparative accuracy requires a risk-of-bias judgment for the accuracy of each test (resulting from QUADAS-2) and additional criteria specific to test comparisons. Examples of such additional criteria include whether participants either received all index tests or were randomly assigned to index tests, and whether index tests were interpreted with blinding to the results of other index tests. The QUADAS-C tool will be useful for systematic reviews of diagnostic test accuracy addressing comparative questions. Furthermore, researchers may use this tool to identify and avoid risk of bias when designing a comparative diagnostic test accuracy study.
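To illustrate the assessment logic described in the abstract, the minimal Python sketch below shows how a per-domain comparative risk-of-bias judgment could be derived from the QUADAS-2 judgments for each index test together with the comparison-specific criteria. The combination rule shown is a hypothetical assumption for illustration only; it is not prescribed by QUADAS-C, which leaves the final judgment to reviewer discretion.

```python
from enum import Enum

class Risk(Enum):
    LOW = "low"
    HIGH = "high"
    UNCLEAR = "unclear"

def comparative_domain_judgment(test_a: Risk, test_b: Risk,
                                comparative_criteria: Risk) -> Risk:
    """Illustrative (assumed) combination rule for one QUADAS-C domain:
    the comparative judgment is low risk only if the QUADAS-2 judgments
    for both index tests are low AND the comparison-specific criteria
    raise no concerns; any high-risk component makes it high risk."""
    judgments = (test_a, test_b, comparative_criteria)
    if all(j is Risk.LOW for j in judgments):
        return Risk.LOW
    if any(j is Risk.HIGH for j in judgments):
        return Risk.HIGH
    return Risk.UNCLEAR

# Example: both index tests judged low risk by QUADAS-2, but blinding
# between index tests was not reported, so the comparison-specific
# criteria are unclear and the comparative judgment is unclear as well.
print(comparative_domain_judgment(Risk.LOW, Risk.LOW, Risk.UNCLEAR))
```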

Original language: English
Pages (from-to): 1592-1599
Number of pages: 8
Journal: Annals of Internal Medicine
Volume: 174
Issue number: 11
Early online date: 26 Oct 2021
DOIs
Publication status: Published - 1 Nov 2021

Bibliographical note

Funding Information:
Grant Support: By an NIHR Postdoctoral Fellowship (Dr. Takwoingi) and by the NIHR Birmingham Biomedical Research Centre (Drs. Takwoingi, Deeks, and Davenport). This article presents independent research supported by the NIHR Birmingham Biomedical Research Centre at the University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham.

Publisher Copyright:
© 2021 American College of Physicians. All rights reserved.

Members of the QUADAS-C Group
Patrick M.M. Bossuyt, PhD† (University of Amsterdam).
Miriam G. Brazzelli, PhD† (University of Aberdeen).
Clare F. Davenport, PhD*† (University of Birmingham). Other contributions: conceptualization, methodology, formal analysis, writing – review and editing.
Jonathan J. Deeks, PhD*† (University of Birmingham). Other contributions: conceptualization, methodology, formal analysis, writing – review and editing.
Jacqueline Dinnes, PhD† (University of Birmingham).
Kurinchi S. Gurusamy, MBBS, PhD† (University College London).
Hayley E. Jones, PhD† (University of Bristol).
Christopher J. Hyde, MD*† (University of Exeter). Other contributions: conceptualization, methodology, formal analysis, writing – review and editing.
Stefan Lange, MD† (Institute for Quality and Efficiency in Health Care).
Miranda W. Langendam, PhD† (University of Amsterdam).
Mariska M.G. Leeflang, DVM, PhD*† (University of Amsterdam). Other contributions: conceptualization, project administration, methodology, formal analysis, writing – review and editing, supervision.
Petra Macaskill, PhD† (The University of Sydney).
Sue Mallett, DPhil*† (University College London). Other contributions: conceptualization, methodology, formal analysis, writing – review and editing.
Matthew D.F. McInnes, MD, PhD† (University of Ottawa).
Johannes B. Reitsma, MD, PhD† (Utrecht University).
Anne W.S. Rutjes, PhD† (University of Bern).
Alison Sinclair, MD, PhD† (Canadian Agency for Drugs and Technologies in Health).
Yemisi Takwoingi, DVM, PhD*† (University of Birmingham). Other contributions: conceptualization, methodology, formal analysis, writing – review and editing.
Henrica C.W. de Vet, PhD† (Vrije Universiteit Amsterdam).
Gianni Virgili, MD† (Queen's University Belfast).
Ros Wade, MSc† (University of York).
Marie E. Westwood, PhD† (Kleijnen Systematic Reviews).
Penny F. Whiting, PhD*† (University of Bristol). Other contributions: conceptualization, methodology, formal analysis, writing – review and editing.
Bada Yang, MD*† (University of Amsterdam). Other contributions: conceptualization, project administration, methodology, data collection, formal analysis, writing – original draft, writing – review and editing.

* Steering group members.
† Members who authored this work.

Data Availability Statement

Reproducible Research Statement
Study protocol: The protocol for the Delphi study is available at https://osf.io/tmze9; the protocol for the pilot study is available at https://osf.io/agx3z. Statistical code: Not applicable. Data set: Not available.

Keywords

  • Comparative effectiveness research
  • Epidemiology
  • Grading of Recommendations Assessment, Development and Evaluation
  • Systematic reviews
  • Research quality assessment
