Abstract
Language models are often used as the backbone of modern dialogue systems. These models are pre-trained on large amounts of fluent written language. Repetition is typically penalised when evaluating language model generations; however, it is a key component of dialogue. Humans use local and partner-specific repetitions; these are preferred by human users and lead to more successful communication in dialogue. In this study, we evaluate (a) whether language models produce human-like levels of repetition in dialogue, and (b) which processing mechanisms related to lexical re-use they employ during comprehension. We believe that such a joint analysis of model production and comprehension behaviour can inform the development of cognitively inspired dialogue generation systems.
Original language | English |
---|---|
Title of host publication | CoNLL 2023 - 27th Conference on Computational Natural Language Learning, Proceedings |
Editors | Jing Jiang, David Reitter, Shumin Deng |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 254-273 |
Number of pages | 20 |
ISBN (Electronic) | 9798891760394 |
DOIs | |
Publication status | Published - Dec 2023 |
Event | 27th Conference on Computational Natural Language Learning, CoNLL 2023 - Singapore, Singapore. Duration: 6 Dec 2023 → 7 Dec 2023 |
Conference
Conference | 27th Conference on Computational Natural Language Learning, CoNLL 2023 |
---|---|
Country/Territory | Singapore |
City | Singapore |
Period | 6/12/23 → 7/12/23 |
Bibliographical note
Funding Information: We would like to thank the anonymous reviewers for their thoughtful and useful reviews and comments. We also wish to thank Ehud Reiter for his useful comments on this work at an early stage. MG is supported by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No. 819455).