Abstract
Evaluations in machine learning rarely use the latest metrics, datasets, or human evaluation, in favor of remaining compatible with prior work. This compatibility, often facilitated through leaderboards, leads to standardized but outdated evaluation practices. We posit that this standardization happens in the wrong place: evaluation infrastructure should enable researchers to use the latest methods, and what should be standardized instead is how new evaluation advances are incorporated. We introduce GEMv2, the new version of the Generation, Evaluation, and Metrics Benchmark, which uses a modular infrastructure that lets dataset, model, and metric developers benefit from each other's work. GEMv2 supports 40 documented datasets in 51 languages and ongoing online evaluation for all datasets, and our interactive tools make it easier to add new datasets to the living benchmark.
| Original language | English |
|---|---|
| Title of host publication | EMNLP 2022 - 2022 Conference on Empirical Methods in Natural Language Processing |
| Subtitle of host publication | Proceedings of the Demonstrations Session |
| Editors | Wanxiang Che, Ekaterina Shutova |
| Publisher | Association for Computational Linguistics (ACL) |
| Pages | 266-281 |
| Number of pages | 16 |
| ISBN (Electronic) | 9781959429418 |
| DOIs | |
| Publication status | Published - 11 Dec 2022 |
| Event | 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 - Abu Dhabi, United Arab Emirates |
| Duration | 7 Dec 2022 → 11 Dec 2022 |
Conference
| Conference | 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022 |
|---|---|
| Country/Territory | United Arab Emirates |
| City | Abu Dhabi |
| Period | 7/12/22 → 11/12/22 |
Title: GEMv2: Multilingual NLG Benchmarking in a Single Line of Code