Abstract
Most Natural Language Generation systems need to produce accurate texts. We propose a methodology for high-quality human evaluation of the accuracy of generated texts, which is intended to serve as a gold standard for accuracy evaluations of data-to-text systems. We use our methodology to evaluate the accuracy of computer-generated basketball summaries. We then show how our gold-standard evaluation can be used to validate automated metrics.
| Original language | English |
| --- | --- |
| Pages | 158-168 |
| Number of pages | 11 |
| Publication status | Published - Dec 2020 |
| Event | Proceedings of the 13th International Conference on Natural Language Generation, held online, Dublin City University, Dublin, Ireland, 15 Dec 2020 → 18 Dec 2020 (conference number: 13), https://www.inlg2020.org/ |
Conference
| Conference | Proceedings of the 13th International Conference on Natural Language Generation |
| --- | --- |
| Abbreviated title | INLG 2020 |
| Country/Territory | Ireland |
| City | Dublin |
| Period | 15/12/20 → 18/12/20 |
| Internet address | https://www.inlg2020.org/ |
Bibliographical note
Acknowledgements: Many thanks to the Mechanical Turk annotators who participated in our experiment, and also to David Reiter, Tim Daniels, Rodrigo de Oliveira, and Andrew Smith for serving as pilot annotators when we were developing the methodology described in this paper. We would also like to thank Moray Greig for being our basketball domain expert during development. We are also grateful for the very helpful comments on this paper from the anonymous reviewers, the Aberdeen CLAN group, David Howcroft, Clément Rebuffel, and Chris van der Lee. We would also like to thank Sam Wiseman, Ratish Puduppully, and Clément Rebuffel for providing the generated texts from their respective systems. The work presented here is partially funded by the Engineering and Physical Sciences Research Council (EPSRC), which funds Craig Thomson under a National Productivity Investment Fund Doctoral Studentship (EP/R512412/1).