Abstract
Automatic evaluation of machine translation (MT) is a critical tool driving the rapid iterative development of MT systems. While considerable progress has been made on estimating a single scalar quality score, current metrics lack the informativeness of more detailed schemes that annotate individual errors, such as Multidimensional Quality Metrics (MQM). In this paper, we help fill this gap by proposing AUTOMQM, a prompting technique which leverages the reasoning and in-context learning capabilities of large language models (LLMs) and asks them to identify and categorize errors in translations. We start by evaluating recent LLMs, such as PaLM and PaLM-2, through simple score prediction prompting, and we study the impact of labeled data through in-context learning and finetuning. We then evaluate AUTOMQM with PaLM-2 models, and we find that it improves performance compared to just prompting for scores (with particularly large gains for larger models) while providing interpretability through error spans that align with human annotations.
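The abstract describes AUTOMQM only at a high level. As a rough illustration of what this kind of error-annotation prompting involves, the sketch below builds an MQM-style prompt for a source/translation pair, parses error spans from a model reply, and aggregates them into a single penalty score. This is a minimal sketch under stated assumptions, not the paper's actual prompt or code: the template wording, the output format, the severity weights (minor = 1, major = 5), and the hard-coded example reply are all illustrative.

```python
# Illustrative AutoMQM-style prompting sketch (assumptions, not the paper's
# implementation): template text, output format, and severity weights are
# hedged stand-ins for the MQM conventions the paper builds on.

import re

PROMPT_TEMPLATE = """You are an expert translation evaluator.
List all errors in the translation, one per line, in the format:
<error span> - <category> - <severity (minor/major)>
If there are no errors, output "No errors found."

Source ({src_lang}): {source}
Translation ({tgt_lang}): {translation}
Errors:"""

# Assumed MQM-style severity weights (a common convention, not taken from the paper).
SEVERITY_WEIGHTS = {"minor": 1, "major": 5}


def build_prompt(source: str, translation: str, src_lang: str, tgt_lang: str) -> str:
    """Fill the prompt template for a single source/translation pair."""
    return PROMPT_TEMPLATE.format(
        src_lang=src_lang, tgt_lang=tgt_lang, source=source, translation=translation
    )


def parse_errors(llm_output: str):
    """Parse '<span> - <category> - <severity>' lines from a model reply."""
    errors = []
    for line in llm_output.strip().splitlines():
        match = re.match(r"(.+?)\s*-\s*(.+?)\s*-\s*(minor|major)\s*$", line, re.I)
        if match:
            span, category, severity = match.groups()
            errors.append({"span": span, "category": category, "severity": severity.lower()})
    return errors


def mqm_score(errors) -> float:
    """Aggregate parsed errors into a single (negative) MQM-style penalty."""
    return -sum(SEVERITY_WEIGHTS.get(e["severity"], 0) for e in errors)


if __name__ == "__main__":
    # In practice this string would be the reply from prompting an LLM such as
    # PaLM-2 with build_prompt(...); a fixed example is used here instead.
    example_reply = "a el gato - Mistranslation - major\nfaltan comas - Punctuation - minor"
    errors = parse_errors(example_reply)
    print(errors)
    print("MQM score:", mqm_score(errors))  # -> -6
```

The key point the sketch conveys is that, unlike scalar score prediction, the model's output is a list of error spans with categories and severities, which is both interpretable on its own and reducible to a score via weighted aggregation.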
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 8th Conference on Machine Translation, WMT 2023 |
| Publisher | Association for Computational Linguistics |
| Pages | 1064-1081 |
| Number of pages | 18 |
| ISBN (Electronic) | 9798891760417 |
| State | Published - 2023 |
| Externally published | Yes |
| Event | 8th Conference on Machine Translation, WMT 2023, Singapore, Singapore; Duration: 6 Dec 2023 → 7 Dec 2023 |
Publication series
| Name | Conference on Machine Translation - Proceedings |
| --- | --- |
| ISSN (Electronic) | 2768-0983 |
Conference
| Conference | 8th Conference on Machine Translation, WMT 2023 |
| --- | --- |
| Country/Territory | Singapore |
| City | Singapore |
| Period | 6/12/23 → 7/12/23 |
Bibliographical note
Publisher Copyright: © 2023 Association for Computational Linguistics.
ASJC Scopus subject areas
- Language and Linguistics
- Human-Computer Interaction
- Software