View From Above: A Framework for Evaluating Distribution Shifts in Model Behavior



by Tanush Chopra and 2 other authors


Abstract: When large language models (LLMs) are asked to perform certain tasks, how can we be sure that their learned representations align with reality? We propose a domain-agnostic framework for systematically evaluating distribution shifts in LLMs' decision-making processes when they are given control of mechanisms governed by pre-defined rules. While individual LLM actions may appear consistent with expected behavior, statistically significant distribution shifts can emerge across a large number of trials. To test this, we construct a well-defined environment with known outcome logic: blackjack. Across more than 1,000 trials, we uncover statistically significant evidence suggesting behavioral misalignment in the LLMs' learned representations.
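As a rough illustration of the kind of comparison the abstract describes (aggregating an LLM's actions over many trials and testing them against the distribution implied by the environment's pre-defined rules), the sketch below applies a chi-square goodness-of-fit test. The specific test, action counts, and baseline probabilities are assumptions chosen for illustration, not details taken from the paper.

```python
# Hypothetical sketch: compare an LLM's blackjack action distribution for one
# game state against a rule-based baseline using a chi-square goodness-of-fit
# test. All numbers below are illustrative, not results from the paper.
from scipy.stats import chisquare

# Observed action counts over repeated trials of the same game state
# (e.g., player total 16 vs. dealer 10), tallied as [hit, stand].
llm_counts = [312, 188]          # hypothetical tallies over 500 trials
total = sum(llm_counts)

# Expected distribution under the environment's known rules (here, a
# deterministic basic-strategy baseline softened with a small tolerance
# so that no expected count is zero).
baseline_probs = [0.95, 0.05]    # assumed values, for illustration only
expected = [p * total for p in baseline_probs]

# A significant result indicates the LLM's action distribution has shifted
# away from the behavior the rules predict.
stat, p_value = chisquare(f_obs=llm_counts, f_exp=expected)
print(f"chi2={stat:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Statistically significant shift from the expected action distribution.")
```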

Submission history

From: Tanush Chopra
[v1] Mon, 1 Jul 2024 04:07:49 UTC (307 KB)
[v2] Thu, 26 Sep 2024 00:24:25 UTC (1,163 KB)


