KV Shifting Attention Enhances Language Modeling, by Mingyu Xu and 2 other authors
Abstract: Current large language models are mainly based on decoder-only transformers, which have strong in-context learning (ICL) capabilities. It is generally believed that an important foundation of their ICL capability is the induction heads mechanism, which requires at least two layers of attention. To implement the model's induction ability more efficiently, we revisit the induction heads mechanism and propose KV shifting attention. We theoretically prove that KV shifting attention reduces the depth and width the model requires for the induction heads mechanism. Our experimental results demonstrate that KV shifting attention is beneficial for learning induction heads and for language modeling, leading to better performance or faster convergence, from toy models up to pre-trained models with more than 10B parameters.
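The abstract does not spell out the mechanism, but its description suggests that each key and value attends partly to the preceding position, so a single layer can emulate the token-shifted lookup that induction heads normally need two layers to build. Below is a minimal single-head sketch under that assumption: keys and values are learnable mixtures of the current token's projection and the previous token's projection. The class name, the scalar mixing parameters `alpha`/`beta`, and their initialization are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KVShiftingAttention(nn.Module):
    """Hedged sketch of single-head KV shifting attention.

    Assumption: each key/value is a learnable mixture of the current
    position's projection and the previous position's projection.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model, bias=False)
        self.k = nn.Linear(d_model, d_model, bias=False)
        self.v = nn.Linear(d_model, d_model, bias=False)
        # Learnable mixing scalars: [current token, previous token].
        # Initialization to (1, 0) recovers standard attention; this
        # choice is an assumption, not taken from the paper.
        self.alpha = nn.Parameter(torch.tensor([1.0, 0.0]))  # keys
        self.beta = nn.Parameter(torch.tensor([1.0, 0.0]))   # values

    @staticmethod
    def _shift(x: torch.Tensor) -> torch.Tensor:
        # Shift the sequence right by one position, zero-padding the
        # front, so position i receives the projection from position i-1.
        return F.pad(x, (0, 0, 1, 0))[:, :-1, :]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        k = self.alpha[0] * k + self.alpha[1] * self._shift(k)
        v = self.beta[0] * v + self.beta[1] * self._shift(v)
        # Causal scaled dot-product attention over the mixed keys/values.
        return F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Usage: out = KVShiftingAttention(64)(torch.randn(2, 16, 64))
```

Mixing in the previous position's key lets a query that matches token t directly retrieve the value stored at t+1, which is exactly the copy pattern induction heads implement; that is consistent with the abstract's claim of reduced depth and width requirements.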
Submission history
From: Bingning Wang
[v1] Fri, 29 Nov 2024 09:42:38 UTC (1,732 KB)
[v2] Thu, 5 Dec 2024 12:19:38 UTC (2,688 KB)