View a PDF of the paper titled DIRAS: Efficient LLM Annotation of Document Relevance in Retrieval Augmented Generation, by Jingwei Ni and 5 other authors
Abstract: Retrieval Augmented Generation (RAG) is widely employed to ground responses to queries on domain-specific documents. But do RAG implementations leave out important information when answering queries that require an integrated analysis of information (e.g., "Tell me good news in the stock market today.")? To address these concerns, RAG developers need to annotate information retrieval (IR) data for their domain of interest, which is challenging because (1) domain-specific queries usually need nuanced definitions of relevance beyond shallow semantic relevance; and (2) human or GPT-4 annotation is costly and cannot cover all (query, document) pairs (i.e., annotation selection bias), which harms the effectiveness of evaluating IR recall. To address these challenges, we propose DIRAS (Domain-specific Information Retrieval Annotation with Scalability), a manual-annotation-free schema that fine-tunes open-source LLMs to consider nuanced relevance definitions and annotate (partial) relevance labels with calibrated relevance scores. Extensive evaluation shows that DIRAS enables smaller (8B) LLMs to achieve GPT-4-level performance on annotating and ranking unseen (query, document) pairs, and is helpful for real-world RAG development. All code, LLM generations, and human annotations can be found at this https URL.
Submission history
From: Jingwei Ni [view email]
[v1] Thu, 20 Jun 2024 10:04:09 UTC (9,437 KB)
[v2] Tue, 15 Oct 2024 11:37:04 UTC (9,790 KB)
[v3] Wed, 16 Oct 2024 13:16:25 UTC (9,790 KB)
[v4] Thu, 23 Jan 2025 08:41:05 UTC (9,794 KB)