DecoPrompt: Decoding Prompts Reduces Hallucinations when Large Language Models Meet False Premises



Authors: Nan Xu and 1 other author

Abstract: While large language models (LLMs) have demonstrated increasing power, they have also prompted studies of their hallucinated outputs, which deviate from factually correct statements. In this paper, we focus on one important scenario: false premises, where LLMs are distracted by misaligned claims even though the model possesses the factual knowledge required to answer the original questions accurately. Motivated by the observation that the entropy of a false-premise prompt is closely related to its likelihood of eliciting hallucinated generations, we propose a new prompting algorithm, named DecoPrompt, to mitigate hallucination. DecoPrompt leverages LLMs to “decode” false-premise prompts without actually eliciting hallucinated output from them. We perform experiments on two datasets, demonstrating that DecoPrompt effectively reduces hallucinations in the outputs of different LLMs. Moreover, DecoPrompt exhibits cross-model transferability, which facilitates its application to scenarios such as large-size LLMs or settings where model logits are unavailable.
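The abstract describes DecoPrompt only at a high level, and no code accompanies this page. As a rough illustration of the entropy signal it mentions, the sketch below scores prompts by their average per-token negative log-likelihood under a causal LM (one common reading of prompt “entropy”), using Hugging Face transformers. The model choice (gpt2), the candidate prompts, and the pick-the-lowest-entropy step are illustrative assumptions, not the authors' actual method.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def prompt_entropy(model, tokenizer, prompt: str) -> float:
    """Average per-token negative log-likelihood of a prompt under a causal LM.

    Lower values mean the model finds the prompt more plausible; the paper
    links a false-premise prompt's entropy to its likelihood of eliciting
    hallucination. (This NLL-based proxy is an assumption for illustration.)
    """
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        # Passing labels=input_ids makes the model return the mean
        # cross-entropy loss over the prompt tokens.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return outputs.loss.item()

# Hypothetical usage: score candidate rephrasings of a false-premise prompt
# and keep the lowest-entropy one before querying the LLM for an answer.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
candidates = [
    "Why did Einstein fail math in school?",       # false premise stated as fact
    "Did Einstein actually fail math in school?",  # premise questioned
]
scores = {c: prompt_entropy(model, tokenizer, c) for c in candidates}
best = min(scores, key=scores.get)
print(scores, "->", best)
```

Given the abstract's cross-model transferability claim, such entropy scores could plausibly be computed with a small open model even when the target LLM's logits are unavailable, though the paper's exact procedure is not reproduced here.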

Submission history

From: Nan Xu
[v1] Tue, 12 Nov 2024 00:48:01 UTC (8,342 KB)
[v2] Tue, 21 Jan 2025 20:24:03 UTC (8,342 KB)


