Missing Information, Unresponsive Authors, Experimental Flaws: The Impossibility of Assessing the Reproducibility of Previous Human Evaluations in NLP

Anya Belz, Craig Thomson, Ehud Reiter, Gavin Abercrombie, Jose M. Alonso Moral, Mohammad Arvan, Jackie Cheung, Mark Cieliebak, Elizabeth Clark, Kees van Deemter, Tanvi Dinkar, Ondřej Dušek, Steffen Eger, Qixiang Fang, Albert Gatt, Dimitra Gkatzia, Javier González Corbelle, Dirk Hovy, Manuela Hürlimann, Takumi Ito, John D. Kelleher, Filip Klubicka, Huiyuan Lai, Chris van der Lee, Emiel van Miltenburg, Yiru Li, Saad Mahamood, Margot Mieskes, Malvina Nissim, Natalie Parde, Ondrej Plátek, Verena Rieser, Pablo Mosteiro Romero, Joel Tetreault, Antonio Toral, Xiaojun Wang, Leo Wanner, Lewis Watson, Diyi Yang

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution

12 Citations (Scopus)

Abstract

We report our efforts to identify a set of previous human evaluations in NLP that would be suitable for a coordinated study examining what makes human evaluations in NLP more or less reproducible. We present our results and findings, which include that just 13% of papers had (i) sufficiently low barriers to reproduction, and (ii) enough obtainable information, to be considered for reproduction, and that all but one of the experiments we selected for reproduction were discovered to have flaws that made the meaningfulness of conducting a reproduction questionable. As a result, we had to change our coordinated study design from a reproduce approach to a standardise-then-reproduce-twice approach. Our overall (negative) finding, that the great majority of human evaluations in NLP are not repeatable and/or not reproducible and/or too flawed to justify reproduction, paints a dire picture, but presents an opportunity for a rethink about how to design and report human evaluations in NLP.
Original language: English
Title of host publication: The Fourth Workshop on Insights from Negative Results in NLP
Editors: Shabnam Tafreshi, Arjun Akula, João Sedoc, Aleksandr Drozd, Anna Rogers, Anna Rumshisky
Place of Publication: Dubrovnik, Croatia
Publisher: Association for Computational Linguistics
Pages: 1-10
Number of pages: 10
DOIs
Publication status: Published - 1 May 2023
Event: Insights 2023: The Fourth Workshop on Insights from Negative Results in NLP - Dubrovnik, Croatia
Duration: 2 Jun 2023 - 6 Jun 2023

Workshop

Workshop: Insights 2023: The Fourth Workshop on Insights from Negative Results in NLP
Country/Territory: Croatia
City: Dubrovnik
Period: 2/06/23 - 6/06/23
