Abstract
This paper presents the results of the WMT23 Metrics Shared Task. Participants submitting automatic MT evaluation metrics were asked to score the outputs of the translation systems competing in the WMT23 News Translation Task. All metrics were evaluated on how well they correlate with human ratings at the system and segment level. Similar to last year, we acquired our own human ratings based on expert-based human evaluation via Multidimensional Quality Metrics (MQM). Following last year's success, we also included a challenge set subtask, where participants had to create contrastive test suites for evaluating metrics' ability to capture and penalise specific types of translation errors. Furthermore, we improved our meta-evaluation procedure by considering fewer tasks and calculating a global score by weighted averaging across the various tasks.
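The meta-evaluation idea sketched in the abstract (per-task agreement between metric scores and human MQM ratings, combined into a single global score by weighted averaging) can be illustrated with a minimal sketch. The task names, weights, and the use of Pearson correlation below are illustrative assumptions only, not the official WMT23 procedure.

```python
# Minimal sketch of weighted-average meta-evaluation, assuming hypothetical
# task names/weights and Pearson correlation; the actual WMT23 Metrics task
# defines its own per-task measures and weighting scheme.
from scipy.stats import pearsonr


def task_correlation(metric_scores, human_scores):
    """Correlation of a metric with human ratings for one task
    (e.g. one language pair at the system or segment level)."""
    r, _ = pearsonr(metric_scores, human_scores)
    return r


def global_score(task_results, task_weights):
    """Weighted average of per-task results (weights are hypothetical)."""
    total = sum(task_weights.values())
    return sum(task_weights[t] * r for t, r in task_results.items()) / total


# Illustrative usage with made-up numbers.
task_results = {
    "en-de_system": task_correlation([0.82, 0.79, 0.91], [0.80, 0.75, 0.92]),
    "zh-en_segment": task_correlation([0.55, 0.60, 0.40], [0.50, 0.65, 0.45]),
}
task_weights = {"en-de_system": 1.0, "zh-en_segment": 2.0}
print(global_score(task_results, task_weights))
```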
Original language | English |
---|---|
Title of host publication | Proceedings of the 8th Conference on Machine Translation, WMT 2023 |
Publisher | Association for Computational Linguistics |
Pages | 576-626 |
Number of pages | 51 |
ISBN (Electronic) | 9798891760417 |
State | Published - 2023 |
Externally published | Yes |
Event | 8th Conference on Machine Translation, WMT 2023 - Singapore, Singapore. Duration: 6 Dec 2023 → 7 Dec 2023 |
Publication series
Name | Conference on Machine Translation - Proceedings |
---|---|
ISSN (Electronic) | 2768-0983 |
Conference
Conference | 8th Conference on Machine Translation, WMT 2023 |
---|---|
Country/Territory | Singapore |
City | Singapore |
Period | 6/12/23 → 7/12/23 |
Bibliographical note
Publisher Copyright: © 2023 Association for Computational Linguistics.
ASJC Scopus subject areas
- Language and Linguistics
- Human-Computer Interaction
- Software