Benchmarking LLMs via Uncertainty Quantification



Authors: Fanghua Ye and 7 other authors

Abstract: The proliferation of open-source Large Language Models (LLMs) from various institutions has highlighted the urgent need for comprehensive evaluation methods. However, current evaluation platforms, such as the widely recognized HuggingFace open LLM leaderboard, neglect a crucial aspect: uncertainty, which is vital for thoroughly assessing LLMs. To bridge this gap, we introduce a new benchmarking approach for LLMs that integrates uncertainty quantification. Our examination involves nine LLMs (LLM series) spanning five representative natural language processing tasks. Our findings reveal that: I) LLMs with higher accuracy may exhibit lower certainty; II) Larger-scale LLMs may display greater uncertainty compared to their smaller counterparts; and III) Instruction-finetuning tends to increase the uncertainty of LLMs. These results underscore the significance of incorporating uncertainty in the evaluation of LLMs.
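
The abstract does not spell out which uncertainty estimator the authors use, so the following is only a minimal sketch of the general idea: reporting an uncertainty measure (here, predictive entropy over answer options) alongside accuracy when benchmarking an LLM on a multiple-choice task. The `get_option_probs` callable is a hypothetical stand-in for querying the model under evaluation, not part of the paper.

```python
# Minimal sketch (not the paper's exact method): score a multiple-choice
# benchmark by both accuracy and an entropy-based uncertainty measure.
import math

def predictive_entropy(option_probs):
    """Shannon entropy of the normalized probabilities over answer options."""
    total = sum(option_probs)
    probs = [p / total for p in option_probs]
    return -sum(p * math.log(p) for p in probs if p > 0)

def evaluate(examples, get_option_probs):
    """Return (accuracy, mean predictive entropy) over a list of examples.

    `get_option_probs(question, options)` is an assumed callable that returns
    one probability per answer option from the LLM under evaluation.
    """
    correct, entropies = 0, []
    for ex in examples:
        probs = get_option_probs(ex["question"], ex["options"])
        pred = max(range(len(probs)), key=lambda i: probs[i])
        correct += int(pred == ex["answer_index"])
        entropies.append(predictive_entropy(probs))
    return correct / len(examples), sum(entropies) / len(entropies)
```

Tracking both numbers makes the abstract's findings concrete: two models with the same accuracy can differ sharply in mean entropy, which a leaderboard that reports accuracy alone would not reveal.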

Submission history

From: Fanghua Ye
[v1] Tue, 23 Jan 2024 14:29:17 UTC (8,951 KB)
[v2] Thu, 25 Apr 2024 14:00:01 UTC (8,980 KB)
[v3] Thu, 31 Oct 2024 16:58:51 UTC (8,958 KB)
