Testing Uncertainty of Large Language Models for Physics Knowledge and Reasoning


[Submitted on 18 Nov 2024]

Authors: Elizaveta Reganova et al.

Abstract: Large Language Models (LLMs) have gained significant popularity in recent years for their ability to answer questions across a wide range of fields. However, these models tend to “hallucinate” their responses, which makes it challenging to evaluate their performance. A central challenge is determining how to assess the certainty of a model’s predictions and how that certainty correlates with accuracy. In this work, we present an analysis of the performance of popular open-source LLMs, as well as GPT-3.5 Turbo, on multiple-choice physics questionnaires. We focus on the relationship between answer accuracy and answer variability across physics topics. Our findings suggest that most models give accurate replies when they are certain, but this is far from a general behavior. The relationship between accuracy and uncertainty follows a broad, horizontally oriented bell-shaped distribution. We report how the asymmetry between accuracy and uncertainty intensifies as questions demand more logical reasoning from the LLM agent, while the relationship remains sharp for knowledge-retrieval tasks.
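
The abstract describes relating answer accuracy to answer variability on multiple-choice questions but does not specify an implementation. Below is a minimal, hypothetical sketch of one way such a measurement could be set up: sample the model several times on the same item, take the majority answer for accuracy, and use the entropy of the sampled answer distribution as an uncertainty proxy. The function names, the sampling approach, and the entropy measure are illustrative assumptions, not the authors' actual method.

```python
import math
from collections import Counter

def answer_uncertainty(sampled_answers):
    """Shannon entropy of the empirical answer distribution (higher = more uncertain).
    This is one possible uncertainty proxy; the paper may use a different measure."""
    counts = Counter(sampled_answers)
    total = len(sampled_answers)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def evaluate_question(ask_model, question, correct_choice, n_samples=20):
    """Query the model repeatedly on one multiple-choice question and report
    the majority answer, whether it is correct, and the spread of the samples."""
    answers = [ask_model(question) for _ in range(n_samples)]
    majority, _ = Counter(answers).most_common(1)[0]
    return {
        "majority_answer": majority,
        "correct": majority == correct_choice,
        "uncertainty": answer_uncertainty(answers),
    }

if __name__ == "__main__":
    # Stand-in model that always answers "B"; a real evaluation would call an LLM API here.
    result = evaluate_question(lambda q: "B", "A ball is dropped from rest...", correct_choice="B")
    print(result)  # {'majority_answer': 'B', 'correct': True, 'uncertainty': 0.0}
```

Under this kind of setup, plotting per-question accuracy against the entropy proxy is one way the accuracy-uncertainty relationship described above could be visualized.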

Submission history

From: Elizaveta Reganova
[v1]
Mon, 18 Nov 2024 13:42:13 UTC (1,338 KB)


