MoDeGPT: Modular Decomposition for Large Language Model Compression, by Chi-Heng Lin and 7 other authors
Abstract: Large Language Models (LLMs) have reshaped the landscape of artificial intelligence by demonstrating exceptional performance across various tasks. However, their substantial computational requirements make deployment challenging on devices with limited resources. Recently, compression methods based on low-rank matrix techniques have shown promise, yet these often degrade accuracy or introduce significant overhead in parameters and inference latency. This paper introduces Modular Decomposition (MoDeGPT), a novel structured compression framework that requires no recovery fine-tuning while resolving the above drawbacks. MoDeGPT partitions the Transformer block into modules composed of matrix pairs and reduces the hidden dimensions by reconstructing the module-level outputs. MoDeGPT builds on a theoretical framework that utilizes three well-established matrix decomposition algorithms (Nyström approximation, CR decomposition, and SVD) and applies them to our redefined transformer modules. Our comprehensive experiments show that MoDeGPT, without backward propagation, matches or surpasses previous structured compression methods that rely on gradient information, and saves 98% of the compute cost when compressing a 13B model. On Llama-2/3 and OPT models, MoDeGPT maintains 90-95% of zero-shot performance at 25-30% compression rates. Moreover, the compression can be completed on a single GPU within a few hours and increases inference throughput by up to 46%.
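To make the module-level idea concrete, below is a minimal sketch of the SVD-style case only: a pair of weight matrices sharing an inner dimension is jointly truncated so the pair's composed output is approximately reconstructed with a smaller inner width. The function name `compress_pair_svd`, the matrix names `W_a`/`W_b`, and the shapes are illustrative assumptions, not the paper's code; the actual method also covers the Nyström and CR cases and reconstructs outputs from calibration activations, which this toy example omits.

```python
# Minimal sketch (not the paper's implementation): compress a linear matrix pair
# (W_a, W_b) acting as y = W_b @ W_a @ x by truncated SVD of the composed map,
# so the shared inner dimension shrinks from d_mid to r while the module-level
# output is approximately reconstructed. All names/shapes are illustrative.
import numpy as np

def compress_pair_svd(W_a: np.ndarray, W_b: np.ndarray, r: int):
    """Return (W_a_r, W_b_r) with inner dimension r such that
    W_b_r @ W_a_r approximates W_b @ W_a (best rank-r approximation)."""
    # SVD of the composed map; keep only the top-r singular directions.
    U, s, Vt = np.linalg.svd(W_b @ W_a, full_matrices=False)
    W_b_r = U[:, :r] * s[:r]   # shape (d_out, r)
    W_a_r = Vt[:r, :]          # shape (r, d_in)
    return W_a_r, W_b_r

# Toy usage: a pair with inner width 512 compressed to rank 128,
# cutting the pair's parameter count by 75%.
rng = np.random.default_rng(0)
W_a = rng.standard_normal((512, 256))   # maps d_in=256 -> d_mid=512
W_b = rng.standard_normal((256, 512))   # maps d_mid=512 -> d_out=256
W_a_r, W_b_r = compress_pair_svd(W_a, W_b, r=128)

x = rng.standard_normal(256)
y_full = W_b @ W_a @ x
y_comp = W_b_r @ W_a_r @ x
print("relative output error:", np.linalg.norm(y_full - y_comp) / np.linalg.norm(y_full))
```

In the paper's setting the analogous reduction is applied per module type with the decomposition best suited to it, rather than plain SVD everywhere as in this sketch.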
Submission history
From: Chi-Heng Lin
[v1] Mon, 19 Aug 2024 01:30:14 UTC (14,287 KB)
[v2] Tue, 20 Aug 2024 05:28:27 UTC (14,287 KB)
[v3] Fri, 13 Sep 2024 05:34:14 UTC (14,287 KB)