Abstract
Recent large language models (LLMs) leverage human feedback to improve their generation quality. However, human feedback is costly to obtain, especially at inference time. In this work, we propose LLMRefine, an inference-time optimization method that refines an LLM's output. The core idea is to use a learned fine-grained feedback model to pinpoint defects and guide the LLM to refine them iteratively. Using the original LLM to propose candidate edits, LLMRefine searches for defect-free text via simulated annealing, trading off exploration and exploitation. We conduct experiments on three text generation tasks: machine translation, long-form question answering (QA), and topical summarization. LLMRefine consistently outperforms all baseline approaches, achieving improvements of up to 1.7 MetricX points on translation tasks, 8.1 ROUGE-L points on ASQA, and 2.2 ROUGE-L points on topical summarization.
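The search procedure the abstract describes can be illustrated with a generic simulated-annealing loop. This is a minimal sketch, not the paper's implementation: `propose_edit` stands in for the LLM editor and `count_defects` for the learned fine-grained feedback model, and both names, the cooling schedule, and the toy defect definition below are hypothetical.

```python
import math
import random

def simulated_annealing_refine(text, propose_edit, count_defects,
                               steps=200, t0=1.0, cooling=0.97, seed=0):
    """Iteratively refine `text` toward zero defects via simulated annealing.

    propose_edit(text, rng) -> a candidate revision (stands in for the LLM).
    count_defects(text)     -> number of defects found (stands in for the
                               fine-grained feedback model); 0 = defect-free.
    """
    rng = random.Random(seed)
    current, cur_cost = text, count_defects(text)
    best, best_cost = current, cur_cost
    t = t0
    for _ in range(steps):
        if best_cost == 0:
            break  # defect-free text found; stop early
        cand = propose_edit(current, rng)
        cand_cost = count_defects(cand)
        delta = cand_cost - cur_cost
        # Always accept improvements; accept worse candidates with
        # probability exp(-delta / t) so the search keeps exploring.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current, cur_cost = cand, cand_cost
        if cur_cost < best_cost:
            best, best_cost = current, cur_cost
        t *= cooling  # anneal: gradually shift toward exploitation
    return best, best_cost

# Toy stand-ins: every "x" is a defect; an edit may fix one character.
def count_defects(s):
    return s.count("x")

def propose_edit(s, rng):
    chars = list(s)
    i = rng.randrange(len(chars))
    if chars[i] == "x":
        chars[i] = "o"
    return "".join(chars)

refined, cost = simulated_annealing_refine("xoxox", propose_edit, count_defects)
```

The acceptance rule is the standard Metropolis criterion: early on (high temperature) the search tolerates edits that temporarily add defects, which helps it escape local minima; as the temperature decays it becomes greedy.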
Original language | English |
---|---|
Title of host publication | Findings of the Association for Computational Linguistics |
Subtitle of host publication | NAACL 2024 - Findings |
Editors | Kevin Duh, Helena Gomez, Steven Bethard |
Publisher | Association for Computational Linguistics (ACL) |
Pages | 1429-1445 |
Number of pages | 17 |
ISBN (Electronic) | 9798891761193 |
DOIs | |
State | Published - 2024 |
Externally published | Yes |
Event | 2024 Findings of the Association for Computational Linguistics: NAACL 2024 - Mexico City, Mexico Duration: 16 Jun 2024 → 21 Jun 2024 |
Publication series
Name | Findings of the Association for Computational Linguistics: NAACL 2024 - Findings |
---|---|
Conference
Conference | 2024 Findings of the Association for Computational Linguistics: NAACL 2024 |
---|---|
Country/Territory | Mexico |
City | Mexico City |
Period | 16/06/24 → 21/06/24 |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.
ASJC Scopus subject areas
- Computational Theory and Mathematics
- Software