ASVD: Activation-aware Singular Value Decomposition for Compressing Large Language Models, by Zhihang Yuan and 5 other authors
Abstract: In this paper, we introduce a new post-training compression paradigm for Large Language Models (LLMs) to facilitate their wider adoption. We delve into low-rank decomposition of LLM weights and find that the challenges of this task stem from the distribution variance in LLM activations and the sensitivity differences among various kinds of layers. To address these issues, we propose a training-free approach called Activation-aware Singular Value Decomposition (ASVD). Specifically, ASVD manages activation outliers by transforming the weight matrix based on the activation distribution. This transformation allows the outliers in the activation matrix to be absorbed into the transformed weight matrix, thereby improving decomposition accuracy. Additionally, we propose an efficient iterative calibration process that optimizes the decomposition per layer, accounting for the varying sensitivity of different LLM layers. In this way, ASVD can compress a network by 10%-30%. Building on the successful low-rank decomposition of the projection matrices in the self-attention module, we further apply ASVD to compress the KV cache. By reducing the channel dimension of the KV activations, the memory required for the KV cache can be significantly reduced. ASVD achieves a 50% reduction in KV cache size without performance degradation, in a training-free manner. Code is anonymously available in supplementary materials.
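To make the core idea concrete, below is a minimal sketch of an activation-aware low-rank decomposition in the spirit the abstract describes. It is not the paper's implementation: the per-channel scaling here is simply the mean absolute calibration activation, and the rank is fixed by the caller, whereas ASVD uses a calibrated scaling and an iterative, layer-specific rank selection.

```python
import numpy as np

def activation_aware_svd(W, X, rank):
    """Low-rank factorization of a weight matrix W (out_dim x in_dim),
    guided by calibration activations X (n_samples x in_dim).

    Hypothetical sketch: scale input channels by their activation
    magnitude so outlier channels are absorbed into the decomposed
    matrix, then truncate the SVD to the given rank.
    """
    # Per-input-channel activation magnitude; outlier channels get large scales.
    s = np.abs(X).mean(axis=0) + 1e-6            # (in_dim,)
    S = np.diag(s)
    S_inv = np.diag(1.0 / s)

    # Decompose the scaled weight so activation outliers are "absorbed".
    U, sigma, Vt = np.linalg.svd(W @ S, full_matrices=False)

    # Keep the top-`rank` singular directions, splitting sigma between factors.
    sqrt_sigma = np.sqrt(sigma[:rank])
    A = U[:, :rank] * sqrt_sigma                  # (out_dim, rank)
    B = (sqrt_sigma[:, None] * Vt[:rank]) @ S_inv # (rank, in_dim)

    # W is approximated by A @ B, so y = x @ W.T becomes y = (x @ B.T) @ A.T
    return A, B
```

Replacing a linear layer's weight with the two factors A and B reduces both parameters and the per-token compute when rank is well below min(out_dim, in_dim); the same low-rank projection of the key/value weights is what shrinks the channel dimension of the cached KV activations.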
Submission history
From: Zhihang Yuan [view email]
[v1] Sun, 10 Dec 2023 08:41:24 UTC (630 KB)
[v2] Fri, 24 May 2024 06:28:15 UTC (997 KB)
[v3] Wed, 18 Sep 2024 04:53:46 UTC (997 KB)
[v4] Tue, 29 Oct 2024 12:28:58 UTC (1,010 KB)