A Structured Review of the Validity of BLEU

Research output: Contribution to journal › Article › peer-review


Abstract

The BLEU metric has been widely used for over 15 years to evaluate NLP systems, especially in machine translation and natural language generation. I present a structured review of the evidence on whether BLEU is a valid evaluation technique, in other words whether BLEU scores correlate with the real-world utility and user satisfaction of NLP systems; this review covers 284 correlations reported in 34 papers. Overall, the evidence supports using BLEU for diagnostic evaluation of MT systems (which is what it was originally proposed for), but does not support using BLEU outwith MT, for evaluation of individual texts, or for scientific hypothesis testing.
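For context on the metric under review: BLEU scores a candidate text against reference texts by combining clipped n-gram precisions (geometric mean, typically up to 4-grams) with a brevity penalty. A minimal single-reference sketch in Python, with simplifications of my own (no smoothing, one reference, whitespace tokenization) that are not part of the paper, might look like:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    """Minimal single-reference BLEU: geometric mean of clipped
    n-gram precisions, scaled by a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clip each candidate n-gram count by its count in the reference,
        # so repeating a matching word cannot inflate precision.
        clipped = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if clipped == 0:
            return 0.0  # one zero precision drives the geometric mean to 0
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)
```

An exact match scores 1.0, and a candidate sharing some but not all n-grams with the reference scores strictly between 0 and 1 — the scalar that the 284 reviewed correlations compare against human judgments. Production implementations (e.g. sacrebleu) add multi-reference support, smoothing, and standardized tokenization.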
Original language: English
Pages (from-to): 393-401
Number of pages: 9
Journal: Computational Linguistics
Volume: 44
Issue number: 3
Early online date: 21 Sept 2018
DOIs
Publication status: Published - Sept 2018
