Self-Evaluation of Large Language Model based on Glass-box Features



By Hui Huang and 6 other authors

Abstract: The proliferation of open-source Large Language Models (LLMs) underscores the pressing need for evaluation methods. Existing works primarily rely on external evaluators, focusing on training and prompting strategies. However, a crucial aspect, model-aware glass-box features, is overlooked. In this study, we explore the utility of glass-box features under the scenario of self-evaluation, namely applying an LLM to evaluate its own output. We investigate various glass-box feature groups and discover that the softmax distribution serves as a reliable quality indicator for self-evaluation. Experimental results on public benchmarks validate the feasibility of self-evaluation of LLMs using glass-box features.
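To make the idea concrete, here is a minimal, illustrative sketch (not the paper's exact method) of how a glass-box confidence signal can be derived from the softmax distribution over generated tokens. The function name `glass_box_score` and the per-token logit interface are assumptions for illustration; real LLM APIs expose these quantities differently.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a logit vector."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def glass_box_score(step_logits):
    """Summarize the model's own softmax distributions as quality signals.

    step_logits: list of per-token logit vectors from the model's forward
    pass (a hypothetical interface for illustration). Returns the mean
    max-probability and mean entropy across generated tokens.
    """
    max_probs, entropies = [], []
    for logits in step_logits:
        probs = softmax(logits)
        max_probs.append(max(probs))
        entropies.append(-sum(p * math.log(p) for p in probs if p > 0))
    n = len(step_logits)
    return sum(max_probs) / n, sum(entropies) / n

# Toy example: a "confident" output (peaked softmax) vs. a "hesitant" one.
confident = [[8.0, 0.0, 0.0], [7.5, 0.5, 0.0]]
hesitant = [[1.0, 0.9, 0.8], [0.5, 0.4, 0.6]]
conf_p, conf_e = glass_box_score(confident)
hes_p, hes_e = glass_box_score(hesitant)
# Higher mean max-probability and lower entropy indicate a more
# peaked softmax, which the paper finds correlates with output quality.
```

In this toy setup, the peaked distributions yield a higher mean max-probability and lower entropy than the near-uniform ones, which is the intuition behind using the softmax distribution as a self-evaluation signal.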

Submission history

From: Hui Huang
[v1]
Thu, 7 Mar 2024 04:50:38 UTC (1,961 KB)
[v2]
Fri, 27 Sep 2024 07:08:10 UTC (2,175 KB)


