Mitigating Knowledge Conflicts in Language Model-Driven Question Answering, by Han Cao and 5 other authors
Abstract: In knowledge-driven sequence-to-sequence generation tasks, such as document-based question answering and document summarization, two fundamental knowledge sources play crucial roles: the inherent knowledge embedded in model parameters and the external knowledge obtained through context. Recent studies have revealed a significant challenge: when the model's inherent knowledge is misaligned with the ground-truth answers in the training data, the system may exhibit problematic behaviors during inference, such as ignoring the input context or generating unfaithful content. Our investigation proposes a strategy to minimize hallucination by building an explicit connection between source inputs and generated outputs. We specifically target a common hallucination pattern in question answering, examining how the correspondence between entities and their contexts during model training influences the system's performance at inference time.
Submission history
From: Han Cao
[v1] Mon, 18 Nov 2024 07:33:10 UTC (91 KB)
[v2] Sat, 4 Jan 2025 09:16:31 UTC (505 KB)
[v3] Wed, 15 Jan 2025 07:46:15 UTC (508 KB)