Toward Adaptive Large Language Models Structured Pruning via Hybrid-grained Weight Importance Assessment
Jun Liu and 11 other authors
Abstract: Structured pruning for large language models (LLMs) has garnered significant academic interest due to its ability to efficiently compress and accelerate LLMs by eliminating redundant weight groups at a coarse granularity. Current structured pruning methods for LLMs typically rely on a single granularity for assessing weight importance, resulting in notable performance degradation on downstream tasks. Intriguingly, our empirical investigations reveal that unstructured pruning, which better preserves performance by pruning weights at a finer granularity, i.e., individual weights, yields significantly different sparse LLM structures compared with structured pruning. This suggests that both holistic and individual assessment of weight importance is essential for LLM pruning. Building on this insight, we introduce Hybrid-grained Weight Importance Assessment (HyWIA), a novel method that merges fine-grained and coarse-grained evaluations of weight importance for pruning LLMs. Leveraging an attention mechanism, HyWIA adaptively determines the optimal blend of granularities in weight importance assessment in an end-to-end pruning manner. Extensive experiments on LLaMA-V1/V2, Vicuna, Baichuan, and Bloom across various benchmarks demonstrate the effectiveness of HyWIA in pruning LLMs. For example, HyWIA surpasses the state-of-the-art LLM-Pruner by an average margin of 2.82% in accuracy across seven downstream tasks when pruning LLaMA-7B by 50%.
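To make the idea of a hybrid-grained importance score concrete, here is a minimal PyTorch sketch. It assumes a first-order (Taylor-style) per-weight score for the fine granularity, a per-output-channel aggregate for the coarse granularity, and a learnable softmax blend standing in for the attention-derived mixing coefficients; the function and variable names are illustrative and the paper's exact formulation may differ.

```python
import torch

def hybrid_importance(weight, grad, alpha_logits):
    """Illustrative blend of fine- and coarse-grained weight importance.

    weight, grad: (out_features, in_features) tensors for one linear layer.
    alpha_logits: (out_features, 2) learnable logits acting as the blending
    coefficients (an assumption; the paper learns these via attention).
    """
    # Fine-grained score: first-order importance per individual weight.
    fine = (weight * grad).abs()                      # (out, in)

    # Coarse-grained score: aggregate per output channel (weight group).
    coarse = fine.sum(dim=1, keepdim=True).expand_as(fine)

    # Adaptive blend over the two granularities, one mix per group.
    alpha = torch.softmax(alpha_logits, dim=-1)       # (out, 2)
    return alpha[:, 0:1] * fine + alpha[:, 1:2] * coarse

# Example: rank output channels of a layer for structured pruning.
out_f, in_f = 8, 16
w = torch.randn(out_f, in_f)
g = torch.randn(out_f, in_f)
logits = torch.zeros(out_f, 2, requires_grad=True)   # optimized end-to-end in practice
score_per_channel = hybrid_importance(w, g, logits).sum(dim=1)
prune_order = score_per_channel.argsort()             # lowest-importance channels first
print(prune_order)
```

In this sketch, channels with the lowest blended scores would be removed first at the target structured-sparsity ratio; the blend lets per-weight outliers and whole-group statistics both influence which groups survive.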
Submission history
From: Jun Liu
[v1] Sat, 16 Mar 2024 04:12:50 UTC (533 KB)
[v2] Tue, 14 May 2024 12:50:55 UTC (461 KB)
[v3] Wed, 15 May 2024 02:20:54 UTC (461 KB)
[v4] Mon, 16 Dec 2024 18:31:27 UTC (1,093 KB)
[v5] Sun, 12 Jan 2025 06:47:39 UTC (1,037 KB)