Counterfactual Debating with Preset Stances for Hallucination Elimination of LLMs


Authors: Yi Fang and 4 other authors

Abstract: Large Language Models (LLMs) excel at a wide range of natural language processing tasks but struggle with hallucination. Existing solutions exploit LLMs’ inherent reasoning abilities to alleviate hallucination, for example through self-correction and diverse-sampling methods. However, these methods often over-trust the LLM’s initial answer because of its inherent biases. The key to alleviating this issue is to override those inherent biases during answer inspection. To this end, we propose a CounterFactual Multi-Agent Debate (CFMAD) framework. CFMAD presets the stances of LLMs, compelling each LLM to generate justifications for a predetermined answer’s correctness and thereby overriding its inherent bias. The LLMs with different preset stances then engage a skeptical critic in a counterfactual debate over the rationality of the generated justifications. Finally, a third-party judge evaluates the debate process to determine the final answer. Extensive experiments on four datasets across three tasks demonstrate the superiority of CFMAD over existing methods.
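The abstract describes a three-stage procedure: preset-stance justification, counterfactual debate with a skeptical critic, and a third-party judgment. The sketch below is a minimal illustration of that flow, not the paper’s implementation: `call_llm` is a hypothetical stand-in for any chat-completion API, and the prompts, role wording, and round count are illustrative assumptions.

```python
# Minimal sketch of the CFMAD flow as described in the abstract.
# `call_llm`, the prompts, and the number of rounds are illustrative assumptions.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's API."""
    raise NotImplementedError


def cfmad(question: str, candidate_answers: list[str], rounds: int = 2) -> str:
    # 1. Preset stances: each debater must argue that its assigned
    #    candidate answer is correct, overriding the model's own bias.
    justifications = {
        ans: call_llm(
            f"Question: {question}\n"
            f"Assume the answer is '{ans}'. Argue why it is correct."
        )
        for ans in candidate_answers
    }

    # 2. Counterfactual debate: a skeptical critic challenges each
    #    justification and the debater responds, for a few rounds.
    transcripts = {}
    for ans, justification in justifications.items():
        transcript = [f"Debater ({ans}): {justification}"]
        for _ in range(rounds):
            critique = call_llm(
                f"Question: {question}\n"
                "You are a skeptical critic. Point out flaws in this argument:\n"
                + transcript[-1]
            )
            transcript.append(f"Critic: {critique}")
            rebuttal = call_llm(
                f"Question: {question}\n"
                f"Defend the answer '{ans}' against this critique:\n{critique}"
            )
            transcript.append(f"Debater ({ans}): {rebuttal}")
        transcripts[ans] = "\n".join(transcript)

    # 3. Third-party judge: evaluate the debates and pick the final answer.
    return call_llm(
        f"Question: {question}\n"
        "Here are debates for each candidate answer:\n\n"
        + "\n\n".join(f"=== {ans} ===\n{t}" for ans, t in transcripts.items())
        + "\n\nBased on the debates, which answer is correct? Reply with the answer only."
    )
```

Presetting the stance in the prompt, rather than asking the model for its own answer first, is what the abstract credits with overriding the model’s bias toward its initial answer.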

Submission history

From: Yi Fang
[v1] Mon, 17 Jun 2024 13:21:23 UTC (795 KB)
[v2] Wed, 15 Jan 2025 03:20:24 UTC (719 KB)


