Understanding the RoPE Extensions of Long-Context LLMs: An Attention Perspective



Authors: Meizhi Zhong and 7 other authors

Abstract: Enabling LLMs to handle lengthy contexts is currently a research hotspot. Most LLMs are built upon rotary position embedding (RoPE), a popular position encoding method. A prominent path is therefore to extrapolate RoPE, trained on comparatively short texts, to far longer texts. Considerable effort has been devoted to boosting this extrapolation by extending the RoPE formulation; however, few studies have attempted to explain the inner workings of these extensions comprehensively. In this paper, we offer a straightforward yet in-depth understanding of RoPE extensions from an attention perspective, evaluated on two benchmarking tasks. A broad array of experiments reveals several valuable findings: 1) maintaining attention patterns close to those at the pretrained length improves extrapolation; 2) large attention uncertainty leads to retrieval errors; 3) using longer continual pretraining lengths for RoPE extensions can reduce attention uncertainty and significantly enhance extrapolation.
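To make the setup concrete, here is a minimal NumPy sketch, not the authors' code, of RoPE, of position interpolation as one representative RoPE extension, and of attention entropy as a simple proxy for the "attention uncertainty" the abstract refers to. The function names, the `scale` parameter, and the random-data toy run are illustrative assumptions and do not reproduce the paper's measurements.

```python
import numpy as np

def rope_angles(positions, dim, base=10000.0, scale=1.0):
    """Per-position rotation angles for RoPE.

    scale > 1 mimics position interpolation, one common RoPE extension
    that squeezes longer positions back into the pretrained range
    (illustrative; not the paper's specific method).
    """
    inv_freq = base ** (-np.arange(0, dim, 2) / dim)   # (dim/2,)
    return np.outer(positions / scale, inv_freq)       # (seq, dim/2)

def apply_rope(x, angles):
    """Rotate consecutive channel pairs of x (seq, dim) by the given angles."""
    x1, x2 = x[:, 0::2], x[:, 1::2]
    cos, sin = np.cos(angles), np.sin(angles)
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

def attention_entropy(q, k):
    """Entropy of each query's softmax attention distribution,
    used here as a rough proxy for attention uncertainty."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)
    probs = np.exp(scores)
    probs /= probs.sum(axis=-1, keepdims=True)
    return -(probs * np.log(probs + 1e-12)).sum(axis=-1)

# Toy run on random vectors: plain extrapolation vs. position interpolation.
rng = np.random.default_rng(0)
dim, train_len, long_len = 64, 512, 2048
q = rng.standard_normal((long_len, dim))
k = rng.standard_normal((long_len, dim))
pos = np.arange(long_len)

for name, scale in [("extrapolation", 1.0), ("interpolation", long_len / train_len)]:
    ang = rope_angles(pos, dim, scale=scale)
    ent = attention_entropy(apply_rope(q, ang), apply_rope(k, ang))
    print(f"{name}: mean attention entropy = {ent.mean():.3f}")
```

In this sketch, lower mean entropy would correspond to more concentrated (less uncertain) attention; the paper's findings suggest that extensions which keep attention patterns close to those at the pretrained length, and which reduce this uncertainty, extrapolate better.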

Submission history

From: Meizhi Zhong
[v1]
Wed, 19 Jun 2024 07:23:33 UTC (20,341 KB)
[v2]
Tue, 29 Oct 2024 11:29:31 UTC (20,341 KB)
[v3]
Thu, 12 Dec 2024 08:00:36 UTC (10,837 KB)


