On the Adversarial Vulnerability of Pairwise Evaluation Using Large Language Models, by Hawon Jeong and 3 other authors
Abstract: Pairwise evaluation using large language models (LLMs) is widely adopted for evaluating generated outputs. However, the reliability of LLM evaluators is often compromised by their biased preferences, such as favoring verbosity and an authoritative tone. In this work, we find that the evaluation setup itself can significantly amplify these biases, where pairwise evaluators exhibit more undesirable tendencies than pointwise evaluators. Our analysis further reveals that even when pairwise evaluators make incorrect judgments, they can still accurately identify shortcomings in low-quality outputs. As a simple remedy, we propose incorporating pointwise reasoning into pairwise evaluation. Experimental results show that our method improves the performance of pairwise evaluators on adversarial samples across various models. We hope our findings encourage further exploration into the reliability of LLM evaluators.
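A minimal sketch of the idea described in the abstract: critique each response pointwise first, then condition the pairwise verdict on those critiques. This is not the authors' exact prompting scheme; the prompts and the `call_llm` helper below are illustrative placeholders for whatever LLM client and templates are actually used.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError("Wire this to your LLM client of choice.")


def pointwise_critique(instruction: str, response: str) -> str:
    # Assess one response in isolation, listing its strengths and shortcomings.
    prompt = (
        f"Instruction:\n{instruction}\n\n"
        f"Response:\n{response}\n\n"
        "Evaluate this response on its own. List its strengths and shortcomings."
    )
    return call_llm(prompt)


def pairwise_judgment(instruction: str, response_a: str, response_b: str) -> str:
    # Generate independent pointwise critiques first, then ask for a pairwise
    # verdict conditioned on them, so surface cues like verbosity or an
    # authoritative tone carry less weight in the comparison.
    critique_a = pointwise_critique(instruction, response_a)
    critique_b = pointwise_critique(instruction, response_b)
    prompt = (
        f"Instruction:\n{instruction}\n\n"
        f"Response A:\n{response_a}\nCritique of A:\n{critique_a}\n\n"
        f"Response B:\n{response_b}\nCritique of B:\n{critique_b}\n\n"
        "Considering the critiques above, which response better follows the "
        "instruction? Answer with 'A' or 'B'."
    )
    return call_llm(prompt)
```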
Submission history
From: ChaeHun Park
[v1] Tue, 18 Jun 2024 06:43:04 UTC (1,345 KB)
[v2] Thu, 3 Oct 2024 09:38:48 UTC (1,234 KB)