Extreme Compression of Large Language Models via Additive Quantization

By Vage Egiazarian and 5 other authors

Abstract: The emergence of accurate open large language models (LLMs) has led to a race towards performant quantization techniques that can enable their execution on end-user devices. In this paper, we revisit the problem of “extreme” LLM compression, defined as targeting extremely low bit counts such as 2 to 3 bits per parameter, from the point of view of classic methods in Multi-Codebook Quantization (MCQ). Our algorithm, called AQLM, generalizes the classic Additive Quantization (AQ) approach from information retrieval to advance the state of the art in LLM compression, via two innovations: 1) learned additive quantization of weight matrices in an input-adaptive fashion, and 2) joint optimization of codebook parameters across each transformer block. Broadly, AQLM is the first scheme that is Pareto-optimal in terms of accuracy vs. model size when compressing to less than 3 bits per parameter, and it significantly improves upon all known schemes in the extreme-compression (2-bit) regime. In addition, AQLM is practical: we provide fast GPU and CPU implementations of AQLM for token generation that match or outperform optimized FP16 implementations in speed while using a much smaller memory footprint.
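For intuition about the multi-codebook idea the abstract builds on, here is a minimal sketch of additive quantization: each group of weights is stored as a sum of entries drawn from several codebooks, so only the entry indices need to be kept. The group size, codebook count, and greedy residual encoder below are illustrative assumptions for exposition, not the authors' AQLM configuration (which learns codebooks input-adaptively and optimizes them jointly per transformer block).

```python
import numpy as np

# Hypothetical sizes: groups of g=8 weights, M=2 codebooks, K=256 entries each.
# Storage per group: M * log2(K) = 16 bits -> 2 bits per weight,
# matching the "extreme compression (2-bit) regime" the abstract targets.
g, M, K = 8, 2, 256
rng = np.random.default_rng(0)

# In AQLM the codebooks are learned; random entries stand in here.
codebooks = rng.normal(size=(M, K, g))

def encode(w):
    """Greedy residual encoding: for each codebook, pick the entry closest
    to the remaining residual (a crude stand-in for a learned encoder)."""
    codes, residual = [], w.copy()
    for m in range(M):
        idx = int(np.argmin(((codebooks[m] - residual) ** 2).sum(axis=1)))
        codes.append(idx)
        residual -= codebooks[m][idx]
    return codes

def decode(codes):
    """Reconstruct the weight group as a sum of the selected codebook entries."""
    return sum(codebooks[m][c] for m, c in enumerate(codes))

w = rng.normal(size=g)
codes = encode(w)
print("codes:", codes, "reconstruction error:", np.linalg.norm(w - decode(codes)))
```

The key storage property is visible in the sizes above: a group of 8 FP16 weights (128 bits) collapses to two 8-bit indices (16 bits), and decoding is a cheap sum of table lookups, which is what makes fast CPU/GPU token-generation kernels possible.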

Submission history

From: Vage Egiazarian
[v1] Thu, 11 Jan 2024 18:54:44 UTC (2,320 KB)
[v2] Tue, 6 Feb 2024 18:55:25 UTC (2,993 KB)
[v3] Sat, 8 Jun 2024 10:55:52 UTC (3,998 KB)
[v4] Wed, 11 Sep 2024 07:48:26 UTC (3,998 KB)


