KV Shifting Attention Enhances Language Modeling

By Mingyu Xu and 2 other authors

Abstract: Current large language models are mainly based on decoder-only transformers, which have strong in-context learning (ICL) capabilities. It is generally believed that an important foundation of this ICL capability is the induction heads mechanism, which requires at least two layers of attention. To implement the model's induction ability more efficiently, we revisit the induction heads mechanism and propose KV shifting attention. We theoretically prove that KV shifting attention reduces the model's requirements on the depth and width of the induction heads mechanism. Our experimental results demonstrate that KV shifting attention is beneficial to learning induction heads and to language modeling, leading to better performance or faster convergence across scales, from toy models to pre-trained models with more than 10B parameters.
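The abstract's description suggests a simple mechanism: before attention is computed, each position's key and value are mixed with those of the preceding position using learnable weights, so a single attention layer can express the "match the previous token, copy its successor" pattern that normally takes two layers to form an induction head. Below is a minimal PyTorch sketch of this idea; the module name, the scalar mixing parameters alpha/beta, and the single-head layout are illustrative assumptions based on the abstract, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KVShiftingAttention(nn.Module):
    """Sketch of KV shifting attention (single head, assumed form).

    Keys and values at position t are mixed with those at position t-1
    via learnable scalars, so one layer can approximate induction-head
    behavior. Parameter names are hypothetical, not from the paper.
    """

    def __init__(self, d_model: int):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model, bias=False)
        self.k_proj = nn.Linear(d_model, d_model, bias=False)
        self.v_proj = nn.Linear(d_model, d_model, bias=False)
        self.o_proj = nn.Linear(d_model, d_model, bias=False)
        # Learnable weights mixing the current and previous positions.
        self.alpha = nn.Parameter(torch.tensor([1.0, 0.0]))  # for keys
        self.beta = nn.Parameter(torch.tensor([1.0, 0.0]))   # for values

    @staticmethod
    def _shift(x: torch.Tensor) -> torch.Tensor:
        # Shift the sequence right by one step (zero-pad position 0),
        # so position t sees the key/value originally at position t-1.
        return F.pad(x, (0, 0, 1, 0))[:, :-1, :]

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        k = self.alpha[0] * k + self.alpha[1] * self._shift(k)
        v = self.beta[0] * v + self.beta[1] * self._shift(v)
        # Standard causal attention over the shifted keys and values.
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out)

# Usage example (shapes only):
attn = KVShiftingAttention(d_model=64)
y = attn(torch.randn(2, 16, 64))  # -> (2, 16, 64)
```

Intuitively, the shifted value path lets a query that attends to a matching earlier token retrieve the token that followed it, which is exactly the copy step of an induction head, without needing a second attention layer to do the shifting.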

Submission history

From: Bingning Wang
[v1] Fri, 29 Nov 2024 09:42:38 UTC (1,732 KB)
[v2] Thu, 5 Dec 2024 12:19:38 UTC (2,688 KB)


