Abstract
We evaluate the robustness of several large language models on multiple datasets. Robustness here refers to the relative insensitivity of a model's answers to meaning-preserving variants of its input. Benchmark datasets are constructed by introducing naturally occurring, non-malicious perturbations or by generating semantically equivalent paraphrases of input questions or statements. We further propose a novel metric for assessing model robustness and demonstrate its benefits in the non-adversarial scenario through empirical evaluation of several models on the created datasets.
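The abstract does not spell out how robustness is scored. As an illustration only, the sketch below computes one simple, common proxy for the notion described: answer consistency across meaning-preserving variants of an input. It is not the metric proposed in the paper; all names here (`consistency_score`, `model_answer`) are hypothetical.

```python
# A minimal sketch, NOT the paper's proposed metric (the abstract does not
# detail it). It measures one simple notion of robustness: the fraction of
# meaning-preserving variants for which a model's answer matches its answer
# to the original input.

from typing import Callable, List


def consistency_score(
    model_answer: Callable[[str], str],  # hypothetical: any question -> answer callable
    original: str,
    variants: List[str],
) -> float:
    """Fraction of perturbed/paraphrased inputs that leave the answer unchanged."""
    reference = model_answer(original)
    if not variants:
        return 1.0
    matches = sum(model_answer(v) == reference for v in variants)
    return matches / len(variants)


if __name__ == "__main__":
    # Toy stand-in for an LLM, robust to the surface form of this question.
    toy_model = lambda q: "Paris" if "france" in q.lower() else "unknown"
    question = "What is the capital of France?"
    paraphrases = [
        "Which city is the capital of France?",
        "Name France's capital city.",
    ]
    print(consistency_score(toy_model, question, paraphrases))  # -> 1.0
```

The paper's metric presumably goes beyond this exact-match proxy; the sketch only concretizes "insensitivity to meaning-preserving variants" in runnable form.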
Original language | English
---|---
Title of host publication | EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2024
Editors | Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Publisher | Association for Computational Linguistics (ACL)
Pages | 2794-2802
Number of pages | 9
ISBN (Electronic) | 9798891761681
State | Published - 2024
Externally published | Yes
Event | 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024 - Hybrid, Miami, United States. Duration: 12 Nov 2024 → 16 Nov 2024
Publication series
Name | EMNLP 2024 - 2024 Conference on Empirical Methods in Natural Language Processing, Findings of EMNLP 2024
---|---
Conference
Conference | 2024 Conference on Empirical Methods in Natural Language Processing, EMNLP 2024 |
---|---
Country/Territory | United States |
City | Hybrid, Miami |
Period | 12/11/24 → 16/11/24 |
Bibliographical note
Publisher Copyright: © 2024 Association for Computational Linguistics.
ASJC Scopus subject areas
- Computational Theory and Mathematics
- Computer Science Applications
- Information Systems
- Linguistics and Language