Corrective Retrieval Augmented Generation
Shi-Qi Yan and 3 other authors
Abstract: Large language models (LLMs) inevitably exhibit hallucinations, since the accuracy of generated text cannot be guaranteed solely by the parametric knowledge they encapsulate. Although retrieval-augmented generation (RAG) is a practical complement to LLMs, it relies heavily on the relevance of the retrieved documents, raising concerns about how the model behaves when retrieval goes wrong. To this end, we propose Corrective Retrieval Augmented Generation (CRAG) to improve the robustness of generation. Specifically, a lightweight retrieval evaluator is designed to assess the overall quality of the documents retrieved for a query, returning a confidence score that triggers different knowledge retrieval actions. Since retrieval from static and limited corpora can only return sub-optimal documents, large-scale web searches are utilized as an extension to augment the retrieval results. In addition, a decompose-then-recompose algorithm is applied to retrieved documents to selectively focus on key information and filter out irrelevant content. CRAG is plug-and-play and can be seamlessly coupled with various RAG-based approaches. Experiments on four datasets covering short- and long-form generation tasks show that CRAG significantly improves the performance of RAG-based approaches.
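The confidence-gated action logic described in the abstract can be sketched in a few lines of Python. The sketch below is illustrative only: the threshold values, the action names, and the helper callables (score_fn, refine_fn, web_search_fn) are assumptions made for the example, not the paper's actual implementation or hyperparameters.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical confidence thresholds; the paper's actual values depend on
# how its retrieval evaluator is calibrated.
UPPER_THRESHOLD = 0.7   # above this, retrieval is treated as reliable
LOWER_THRESHOLD = 0.3   # below this, retrieval is treated as wrong


@dataclass
class CragSketch:
    """Illustrative pipeline: evaluator confidence -> action -> knowledge for generation."""
    score_fn: Callable[[str, str], float]        # (query, document) -> confidence in [0, 1]
    refine_fn: Callable[[str, List[str]], str]   # decompose-then-recompose over documents
    web_search_fn: Callable[[str], List[str]]    # fallback web search for extra knowledge

    def correct(self, query: str, retrieved_docs: List[str]) -> str:
        scores = [self.score_fn(query, d) for d in retrieved_docs]
        best = max(scores, default=0.0)

        if best >= UPPER_THRESHOLD:
            # Retrieval looks correct: keep the documents, but strip irrelevant spans.
            return self.refine_fn(query, retrieved_docs)
        if best <= LOWER_THRESHOLD:
            # Retrieval looks incorrect: discard it and fall back to web search.
            return self.refine_fn(query, self.web_search_fn(query))
        # Ambiguous: combine refined retrieval with web-search results.
        return self.refine_fn(query, retrieved_docs + self.web_search_fn(query))


# Toy usage with stand-in components (purely illustrative).
sketch = CragSketch(
    score_fn=lambda q, d: 0.9 if "CRAG" in d else 0.1,
    refine_fn=lambda q, docs: " ".join(docs),
    web_search_fn=lambda q: [f"web result for: {q}"],
)
print(sketch.correct("What is CRAG?", ["CRAG corrects retrieval errors."]))
```

Because the sketch only wraps a scoring function, a document-refinement step, and a fallback search, it mirrors the plug-and-play nature claimed in the abstract: any RAG pipeline that exposes those three pieces could, in principle, slot into this structure.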
Submission history
From: Jia-Chen Gu
[v1] Mon, 29 Jan 2024 04:36:39 UTC (315 KB)
[v2] Fri, 16 Feb 2024 19:10:36 UTC (319 KB)
[v3] Mon, 7 Oct 2024 02:19:21 UTC (322 KB)