Token-based Decision Criteria Are Suboptimal in In-context Learning, by Hakaze Cho and 5 other authors
Abstract: In-Context Learning (ICL) typically derives classification criteria from the output probabilities of manually selected label tokens. However, we argue that such token-based classification criteria lead to suboptimal decision boundaries, even when delicate calibrations through translation and constrained rotation are applied. To address this problem, we propose Hidden Calibration, which renounces token probabilities and uses a nearest-centroid classifier on the LM's last hidden states. In detail, we assign each test sample the label of its nearest centroid, where the centroids are estimated beforehand on a calibration set. Our experiments on 6 models and 10 classification datasets indicate that Hidden Calibration consistently outperforms current token-based baselines by about 20%–50%, achieving a strong state-of-the-art in ICL. Our further analysis demonstrates that Hidden Calibration finds better classification criteria with less inter-class overlap, and that LMs provide linearly separable intra-class clusters with the help of demonstrations, which supports Hidden Calibration and gives new insights into the principles of ICL.
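The abstract's description of Hidden Calibration (estimate per-class centroids of the LM's last hidden states on a calibration set, then label each test sample by its nearest centroid) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random arrays standing in for LM hidden states and the choice of Euclidean distance are assumptions for the sake of a runnable example.

```python
# Minimal sketch of a nearest-centroid classifier over last hidden states,
# in the spirit of Hidden Calibration as described in the abstract.
import numpy as np

def fit_centroids(hidden_states: np.ndarray, labels: np.ndarray) -> dict:
    """Estimate one centroid per class from a calibration set.

    hidden_states: (n_samples, hidden_dim) last hidden states of the LM.
    labels: (n_samples,) integer class labels.
    """
    return {c: hidden_states[labels == c].mean(axis=0) for c in np.unique(labels)}

def predict(hidden_state: np.ndarray, centroids: dict) -> int:
    """Assign the label of the nearest centroid (Euclidean distance assumed)."""
    return min(centroids, key=lambda c: np.linalg.norm(hidden_state - centroids[c]))

# Toy usage: random vectors stand in for real LM hidden states.
rng = np.random.default_rng(0)
calib_h = rng.normal(size=(32, 768))   # calibration-set hidden states
calib_y = rng.integers(0, 2, size=32)  # calibration-set labels
centroids = fit_centroids(calib_h, calib_y)
test_h = rng.normal(size=768)          # one test sample's hidden state
print(predict(test_h, centroids))
```

In practice the hidden states would come from the LM's final position on prompts containing the demonstrations, replacing the decoding over manually selected label tokens with this distance-based criterion.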
Submission history
From: Cho Hakaze
[v1] Mon, 24 Jun 2024 11:16:26 UTC (2,941 KB)
[v2] Wed, 16 Oct 2024 12:00:46 UTC (5,870 KB)