Density estimation with LLMs: a geometric investigation of in-context learning trajectories
Toni J.B. Liu and 3 other authors
Abstract: Large language models (LLMs) demonstrate remarkable emergent abilities to perform in-context learning across various tasks, including time series forecasting. This work investigates LLMs' ability to estimate probability density functions (PDFs) from data observed in-context; such density estimation (DE) is a fundamental task underlying many probabilistic modeling problems. We leverage Intensive Principal Component Analysis (InPCA) to visualize and analyze the in-context learning dynamics of LLaMA-2 models. Our main finding is that these LLMs all follow similar learning trajectories in a low-dimensional InPCA space, trajectories that are distinct from those of traditional density estimation methods such as histograms and Gaussian kernel density estimation (KDE). We interpret the LLaMA in-context DE process as a KDE with an adaptive kernel width and shape. This custom kernel model captures a significant portion of LLaMA's behavior despite having only two parameters. We further speculate on why LLaMA's kernel width and shape differ from those of classical algorithms, providing insights into the mechanism of in-context probabilistic reasoning in LLMs.
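The abstract describes LLaMA's in-context density estimate as a KDE with an adaptive kernel width and shape, captured by a model with only two parameters. As a minimal sketch of what such a two-parameter kernel could look like, the snippet below assumes a generalized-Gaussian kernel with bandwidth h and shape exponent s (s = 2 recovers an ordinary Gaussian KDE, smaller s gives heavier tails); the paper's exact kernel family and parameter names are not given here, so this parameterization is illustrative.

```python
import numpy as np

def two_param_kde(data, grid, h=0.3, s=2.0):
    """Sketch of a two-parameter KDE: bandwidth h sets kernel width,
    exponent s sets kernel shape. Hypothetical generalized-Gaussian
    form; the paper's actual kernel may be parameterized differently."""
    # Scaled distances from every grid point to every observation: (grid, n)
    diffs = np.abs(grid[:, None] - data[None, :]) / h
    kernel = np.exp(-diffs**s)
    # Normalize each per-sample kernel so the estimate integrates to ~1
    dx = grid[1] - grid[0]
    kernel /= kernel.sum(axis=0, keepdims=True) * dx
    # Average the normalized kernels over the in-context samples
    return kernel.mean(axis=1)

# Usage: estimate a density from 32 in-context observations
rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, size=32)
xs = np.linspace(-4.0, 4.0, 200)
pdf = two_param_kde(samples, xs, h=0.3, s=1.5)
```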
Submission history
From: Jianbang Liu
[v1] Mon, 7 Oct 2024 17:22:56 UTC (11,184 KB)
[v2] Wed, 9 Oct 2024 22:23:20 UTC (11,184 KB)