Hypothesis Generation with Large Language Models, by Yangqiaoyu Zhou and 4 other authors
Abstract: Effective generation of novel hypotheses is instrumental to scientific progress. So far, researchers have been the main powerhouse behind hypothesis generation, through painstaking data analysis and thinking (also known as the Eureka moment). In this paper, we examine the potential of large language models (LLMs) to generate hypotheses. We focus on hypothesis generation based on data (i.e., labeled examples). To enable LLMs to handle arbitrarily long contexts, we generate initial hypotheses from a small number of examples and then update them iteratively to improve their quality. Inspired by multi-armed bandits, we design a reward function to inform the exploitation-exploration tradeoff in the update process. Our algorithm generates hypotheses that enable much better predictive performance than few-shot prompting in classification tasks, improving accuracy by 31.7% on a synthetic dataset and by 13.9%, 3.3%, and 24.9% on three real-world datasets. We also outperform supervised learning by 12.8% and 11.2% on two challenging real-world datasets. Furthermore, we find that the generated hypotheses not only corroborate human-verified theories but also uncover new insights for the tasks.
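The bandit-inspired update loop described in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: the `ucb_score` bonus, the `predict` callback, and the accuracy-as-reward choice are all assumptions, standing in for an LLM scoring a candidate hypothesis against a labeled example. The sketch shows the exploitation-exploration idea: each hypothesis is an "arm", and a UCB1-style score decides which hypothesis to test on the next example.

```python
import math

def ucb_score(reward_sum, pulls, total_pulls, c=1.0):
    # UCB1-style score: average reward plus an exploration bonus
    # that shrinks as a hypothesis accumulates evaluations.
    if pulls == 0:
        return float("inf")  # always try an untested hypothesis first
    return reward_sum / pulls + c * math.sqrt(math.log(total_pulls) / pulls)

def update_hypotheses(hypotheses, examples, predict, c=1.0):
    """One bandit-style pass: for each labeled example, evaluate the
    hypothesis with the highest UCB score and credit it with reward 1
    if its prediction matches the label, else 0. `predict(h, x)` is a
    hypothetical callback (in the paper, an LLM applies hypothesis h)."""
    stats = {h: [0.0, 0] for h in hypotheses}  # h -> [reward_sum, pulls]
    total = 0
    for x, y in examples:
        total += 1
        best = max(hypotheses, key=lambda h: ucb_score(*stats[h], total, c))
        reward = 1.0 if predict(best, x) == y else 0.0
        stats[best][0] += reward
        stats[best][1] += 1
    return stats
```

Under this scheme, low-reward hypotheses still receive occasional evaluations (exploration), while high-reward ones are tested most often (exploitation); in the paper's setting, the reward additionally guides which hypotheses to keep, refine, or replace.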
Submission history
From: Haokun Liu [view email]
[v1]
Fri, 5 Apr 2024 18:00:07 UTC (6,074 KB)
[v2]
Fri, 23 Aug 2024 18:00:00 UTC (6,076 KB)
[v3]
Wed, 18 Dec 2024 19:00:00 UTC (4,886 KB)