Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agents, by Weiwei Sun and 7 other authors
Abstract: Large Language Models (LLMs) have demonstrated remarkable zero-shot generalization across various language-related tasks, including search engines. However, existing work exploits the generative ability of LLMs for Information Retrieval (IR) rather than for direct passage ranking, and the discrepancy between the pre-training objectives of LLMs and the ranking objective poses a further challenge. In this paper, we first investigate generative LLMs such as ChatGPT and GPT-4 for relevance ranking in IR. Surprisingly, our experiments reveal that properly instructed LLMs can deliver results competitive with, and even superior to, state-of-the-art supervised methods on popular IR benchmarks. Furthermore, to address concerns about data contamination in LLMs, we collect a new test set, NovelEval, built from the latest knowledge, to verify the model's ability to rank knowledge it has not seen. Finally, to improve efficiency in real-world applications, we explore distilling the ranking capability of ChatGPT into small specialized models using a permutation distillation scheme. Our evaluation shows that a distilled 440M model outperforms a 3B supervised model on the BEIR benchmark. The code to reproduce our results is available at this http URL.
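The listwise "permutation" approach described above can be sketched as follows. This is a minimal illustration, not the authors' exact prompt or parser: it assumes a hypothetical `build_ranking_prompt` that numbers candidate passages for the LLM and a `parse_permutation` that turns a model response such as `[2] > [3] > [1]` back into passage indices.

```python
# Sketch of listwise LLM re-ranking via permutation generation.
# The LLM is shown a query plus numbered passages and asked to emit
# a ranked permutation of identifiers, which is parsed back to indices.
import re

def build_ranking_prompt(query, passages):
    """Format a query and candidate passages into a listwise ranking prompt."""
    lines = [f"I will provide you with {len(passages)} passages, each "
             f"indicated by a numerical identifier []. Rank the passages "
             f"based on their relevance to the query: {query}"]
    for i, passage in enumerate(passages, start=1):
        lines.append(f"[{i}] {passage}")
    lines.append("Answer with the ranking only, e.g. [2] > [1], "
                 "using all identifiers, most relevant first.")
    return "\n".join(lines)

def parse_permutation(response, num_passages):
    """Parse a response like '[2] > [3] > [1]' into 0-based indices,
    dropping duplicates and appending any identifiers the model omitted."""
    seen, order = set(), []
    for match in re.findall(r"\[(\d+)\]", response):
        idx = int(match) - 1
        if 0 <= idx < num_passages and idx not in seen:
            seen.add(idx)
            order.append(idx)
    # Fallback: keep omitted passages in their original order.
    order += [i for i in range(num_passages) if i not in seen]
    return order

passages = ["Paris is the capital of France.",
            "The Eiffel Tower is in Paris.",
            "Berlin is the capital of Germany."]
prompt = build_ranking_prompt("capital of France", passages)
# A made-up model response; in practice this string comes from the LLM:
ranking = parse_permutation("[1] > [2] > [3]", len(passages))
```

In this scheme the parsed permutation can also serve as a distillation signal: a small ranker is trained so that its scores reproduce the teacher's ordering, rather than regressing raw relevance labels.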
Submission history
From: Weiwei Sun [view email]
[v1] Wed, 19 Apr 2023 10:16:03 UTC (531 KB)
[v2] Fri, 27 Oct 2023 12:11:16 UTC (835 KB)
[v3] Sat, 28 Dec 2024 06:20:54 UTC (835 KB)