Do Influence Functions Work on Large Language Models? by Zhe Li and 3 other authors
Abstract: Influence functions are important for quantifying the impact of individual training data points on a model's predictions. Although influence functions have been studied extensively in traditional machine learning models, their application to large language models (LLMs) has been limited. In this work, we conduct a systematic study to address a key question: do influence functions work on LLMs? Specifically, we evaluate influence functions across multiple tasks and find that they consistently perform poorly in most settings. Our further investigation reveals that their poor performance can be attributed to: (1) inevitable approximation errors when estimating the inverse-Hessian-vector product (iHVP) component due to the scale of LLMs, (2) uncertain convergence during fine-tuning, and, more fundamentally, (3) the definition itself, as changes in model parameters do not necessarily correlate with changes in LLM behavior. Our study thus suggests the need for alternative approaches to identifying influential samples.
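For context, the influence function and the iHVP term mentioned above follow the standard formulation from the influence-function literature (this definition is assumed here, not quoted from the abstract). For a model with empirical risk minimizer \(\hat{\theta}\) over training points \(z_1, \dots, z_n\), the influence of a training point \(z\) on the loss at a test point \(z_{\text{test}}\) is

\[
\mathcal{I}(z, z_{\text{test}}) = -\nabla_\theta L(z_{\text{test}}, \hat{\theta})^\top \, H_{\hat{\theta}}^{-1} \, \nabla_\theta L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_\theta^2 L(z_i, \hat{\theta}).
\]

The iHVP is the term \(H_{\hat{\theta}}^{-1} v\) with \(v = \nabla_\theta L(z_{\text{test}}, \hat{\theta})\); since the Hessian \(H_{\hat{\theta}}\) has dimension equal to the number of model parameters, it cannot be formed or inverted explicitly for LLMs and must be approximated iteratively, which is the source of the approximation errors the paper identifies in point (1).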
Submission history
From: Zhe Li
[v1] Mon, 30 Sep 2024 06:50:18 UTC (1,162 KB)
[v2] Thu, 19 Dec 2024 19:33:43 UTC (1,780 KB)