Head-wise Shareable Attention for Large Language Models

Authors: Zouying Cao and 2 other authors

Abstract: Large Language Models (LLMs) suffer from a huge number of parameters, which restricts their deployment on edge devices. Weight sharing is one promising solution that encourages weight reuse, effectively reducing memory usage with little performance drop. However, current weight sharing techniques primarily focus on small-scale models like BERT and employ coarse-grained sharing rules, e.g., layer-wise. This becomes limiting given the prevalence of LLMs, and sharing an entire layer or block clearly diminishes the flexibility of weight sharing. In this paper, we present a perspective on head-wise shareable attention for large language models. We further propose two memory-efficient methods that share parameters across attention heads, with a specific focus on LLMs. Both of them use the same dynamic strategy to select the shared weight matrices. The first method directly reuses the pre-trained weights without retraining, denoted as $\textbf{DirectShare}$. The second method first post-trains with a constraint on weight matrix similarity and then shares, denoted as $\textbf{PostShare}$. Experimental results reveal our head-wise shared models still maintain satisfactory capabilities, demonstrating the feasibility of fine-grained weight sharing applied to LLMs.
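To make the head-wise sharing idea concrete, below is a minimal sketch of a DirectShare-style step: split a pre-trained projection weight into per-head blocks, pick the pair of heads whose blocks are most similar, and let one head reuse the other's weights without any retraining. The function names, the cosine-similarity criterion, and the toy dimensions are assumptions for exposition only, not the paper's actual implementation.

```python
# Illustrative sketch (not the paper's code): DirectShare-style reuse of one
# attention head's projection weights for another, highly similar head.
import torch
import torch.nn.functional as F


def head_blocks(weight: torch.Tensor, num_heads: int):
    """Split an (out_features, in_features) projection weight into per-head row blocks."""
    head_dim = weight.shape[0] // num_heads
    return [weight[i * head_dim:(i + 1) * head_dim, :] for i in range(num_heads)]


def most_similar_heads(weight: torch.Tensor, num_heads: int):
    """Find the pair of heads whose weight blocks have the highest cosine similarity."""
    flat = [b.reshape(-1) for b in head_blocks(weight, num_heads)]
    best_sim, best_pair = -1.0, (0, 1)
    for i in range(num_heads):
        for j in range(i + 1, num_heads):
            sim = F.cosine_similarity(flat[i], flat[j], dim=0).item()
            if sim > best_sim:
                best_sim, best_pair = sim, (i, j)
    return best_pair, best_sim


def share_head(weight: torch.Tensor, num_heads: int, src: int, dst: int) -> torch.Tensor:
    """Overwrite head `dst`'s block with head `src`'s, so both heads reuse one matrix.
    (In a real deployment the two blocks would point to the same storage to save memory.)"""
    head_dim = weight.shape[0] // num_heads
    shared = weight.clone()
    shared[dst * head_dim:(dst + 1) * head_dim, :] = weight[src * head_dim:(src + 1) * head_dim, :]
    return shared


if __name__ == "__main__":
    hidden_size, num_heads = 64, 8
    w_q = torch.randn(hidden_size, hidden_size)   # stand-in for a pre-trained query projection
    (src, dst), sim = most_similar_heads(w_q, num_heads)
    w_q_shared = share_head(w_q, num_heads, src, dst)
    print(f"head {dst} now reuses head {src}'s weights (cosine similarity {sim:.3f})")
```

PostShare, as the abstract describes it, would differ only in adding a post-training stage that encourages the selected weight matrices to become similar before they are tied; the sketch above omits that step.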

Submission history

From: Zouying Cao
[v1] Mon, 19 Feb 2024 04:19:36 UTC (7,779 KB)
[v2] Sun, 6 Oct 2024 03:30:58 UTC (7,828 KB)
[v3] Thu, 24 Oct 2024 05:53:18 UTC (7,828 KB)


