View a PDF of the paper titled LLMs May Perform MCQA by Selecting the Least Incorrect Option, by Haochun Wang and 5 other authors
Abstract: In the field of NLP, Large Language Models (LLMs) have markedly enhanced performance across a variety of tasks. However, the comprehensive evaluation of LLMs remains a persistent challenge for the community. Recently, the adoption of Multiple Choice Question Answering (MCQA) as a benchmark for assessing LLMs has gained considerable traction. However, concerns regarding the robustness of this evaluation method persist. Building upon previous discussions of the issue of variability, we reveal an additional dimension of concern: LLMs may perform MCQA by selecting the least incorrect option rather than a distinctly correct one. This observation suggests that LLMs might regard multiple options as correct, which could undermine the reliability of MCQA as a metric for evaluating LLMs. To address this challenge, we introduce an enhanced dataset augmentation method for MCQA, termed MCQA+, to provide a more accurate reflection of model performance, thereby highlighting the necessity for more sophisticated evaluation mechanisms in the assessment of LLM capabilities.
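The core concern can be illustrated with a minimal sketch. Assuming MCQA is scored by picking the option the model assigns the highest score (e.g., log-probability), argmax selection returns the same answer whether the model is confident in one option or considers all options poor; the scores and the `pick_option` helper below are hypothetical and do not come from the paper.

```python
# Hypothetical per-option scores (e.g., log-probabilities) from a model.
# All names and numbers here are illustrative, not from the paper.

def pick_option(scores):
    """Return the option with the highest score (standard MCQA decoding)."""
    return max(scores, key=scores.get)

# Case 1: the model strongly prefers one option (distinctly correct).
confident = {"A": -0.1, "B": -5.0, "C": -6.0, "D": -7.0}

# Case 2: the model scores every option poorly but must still pick one
# (the "least incorrect" option).
uncertain = {"A": -4.0, "B": -4.2, "C": -4.1, "D": -4.3}

print(pick_option(confident))  # → A
print(pick_option(uncertain))  # → A
```

In both cases the model answers "A" and is scored identically, so accuracy alone cannot distinguish genuine knowledge from least-incorrect guessing, which is the evaluation gap MCQA+ is designed to expose.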
Submission history
From: Haochun Wang [view email]
[v1]
Fri, 2 Feb 2024 12:07:00 UTC (289 KB)
[v2]
Thu, 30 May 2024 01:57:14 UTC (410 KB)
[v3]
Fri, 6 Dec 2024 11:54:40 UTC (361 KB)