Towards Explainable Evaluation Metrics for Machine Translation

Christoph Leiter, Piyawat Lertvittayakumjorn, Marina Fomicheva, Wei Zhao, Yang Gao, Steffen Eger

Research output: Contribution to journal › Article › peer-review

Abstract

Unlike classical lexical overlap metrics such as BLEU, most current evaluation metrics for machine translation (for example, COMET or BERTScore) are based on black-box large language models. They often achieve strong correlations with human judgments, but recent research indicates that the lower-quality classical metrics remain dominant, one potential reason being that their decision processes are more transparent. To foster wider acceptance of novel high-quality metrics, explainability thus becomes crucial. In this concept paper, we identify key properties and key goals of explainable machine translation metrics and provide a comprehensive synthesis of recent techniques, relating them to our established goals and properties. In this context, we also discuss the latest state-of-the-art approaches to explainable metrics based on generative models such as ChatGPT and GPT-4. Finally, we contribute a vision of next-generation approaches, including natural language explanations. We hope that our work can help catalyze and guide future research on explainable evaluation metrics and, indirectly, also contribute to better and more transparent machine translation systems.
Original language: English
Pages (from-to): 1-49
Number of pages: 49
Journal: Journal of Machine Learning Research
Volume: 25
Issue number: 75
Early online date: 1 Mar 2024
Publication status: Published - 1 Mar 2024

Bibliographical note

Acknowledgments and Disclosure of Funding
Since November 2022, Christoph Leiter has been financed by the BMBF project “Metrics4NLG”. Piyawat Lertvittayakumjorn was financially supported by the Anandamahidol Foundation, Thailand, from 2015 to 2021. He mainly contributed to this work until September 2022 while affiliated with Imperial College London (before joining Google as a research scientist). Marina Fomicheva mainly contributed to this work until April 2022. Wei Zhao was supported by the Klaus Tschira Foundation and a Young Marsilius Fellowship, Heidelberg, until December 2023. Yang Gao mainly contributed to this work before he joined Google Research in December 2021. Steffen Eger is financed by DFG Heisenberg grant EG 375/5–1 and by the BMBF project “Metrics4NLG”.

Keywords

  • evaluation metrics
  • explainability
  • interpretability
  • machine translation
  • machine translation evaluation

