A survey of experts to identify methods to detect problematic studies: Stage 1 of the INSPECT-SR Project

Jack Wilkinson* (Corresponding Author), Calvin Heal, George A Antoniou, Ella Flemyng, Alison Avenell, Virginia Barbour, Esmee Bordewijk, Nicholas J L Brown, Mike Clarke, Jo C. Dumville, Steph Grohmann, Lyle C. Gurrin, Jill A Hayden, Kylie E Hunter, Emily Lam, Toby Lasserson, Tianjing Li, Sarah F Lensen, Jianping Liu, Andreas Lundh, Gideon Meyerowitz-Katz, Ben W. Mol, Neil E O’Connell, Lisa Parker, Barbara Redman, Anna Lene Seidler, Kyle A Sheldrick, Emma Sydenham, Darren L Dahly, Madelon van Wely, Lisa Bero, Jamie Kirkham

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Background
Randomised controlled trials (RCTs) inform healthcare decisions. Unfortunately, some published RCTs contain false data, and some appear to have been entirely fabricated. Systematic reviews are performed to identify and synthesise all RCTs which have been conducted on a given topic. This means that any of these ‘problematic studies’ are likely to be included, but there are no agreed methods for identifying them. The INSPECT-SR project is developing a tool to identify problematic RCTs in systematic reviews of healthcare-related interventions. The tool will guide the user through a series of ‘checks’ to determine a study’s authenticity. The first objective in the development process is to assemble a comprehensive list of checks to consider for inclusion.
Methods
We assembled an initial list of checks for assessing the authenticity of research studies, with no restriction to RCTs, and categorised these into five domains: Inspecting results in the paper; Inspecting the research team; Inspecting conduct, governance, and transparency; Inspecting text and publication details; Inspecting the individual participant data. We implemented this list as an online survey, and invited people with expertise and experience of assessing potentially problematic studies to participate through professional networks and online forums. Participants were invited to provide feedback on the checks on the list, and were asked to describe any additional checks they knew of that were not featured in the list.
Results
Extensive feedback on an initial list of 102 checks was provided by 71 participants based in 16 countries across five continents. Fourteen new checks were proposed across the five domains, and suggestions were made to reword checks on the initial list. An updated list of checks was constructed, comprising 116 checks. Many participants expressed a lack of familiarity with statistical checks, and emphasised the importance of feasibility of the tool.
Conclusions
A comprehensive list of trustworthiness checks has been produced. The checks will be evaluated to determine which should be included in the INSPECT-SR tool.
Original language: English
Article number: 111512
Number of pages: 10
Journal: Journal of Clinical Epidemiology
Volume: 175
Early online date: 28 Sept 2024
DOIs
Publication status: Published - Nov 2024

Bibliographical note

The authors would like to thank Richard Stevens for helpful comments during the planning of this study.

Data Availability Statement

The study dataset is available at https://osf.io/6pmx5/.

Keywords

  • Research integrity
  • Fraud
  • Fabrication
  • Misconduct
  • Trustworthiness
  • Randomised controlled trials
  • Systematic reviews
  • Forensic analysis
  • Evidence synthesis
  • Critical appraisal
