BlockDialect: Block-wise Fine-grained Mixed Format Quantization for Energy-Efficient LLM Inference, by Wonsuk Jang and 1 other author
Abstract: The rapidly increasing size of large language models (LLMs) presents significant challenges in memory usage and computational cost. Quantizing both weights and activations can address these issues, with hardware-supported fine-grained scaling emerging as a promising solution to mitigate outliers. However, existing methods struggle to capture nuanced block data distributions. We propose BlockDialect, a block-wise fine-grained mixed-format quantization technique that assigns each block an optimal number format from a formatbook for better data representation. Additionally, we introduce DialectFP4, a formatbook of FP4 variants (akin to dialects) that adapt to diverse data distributions. To leverage this efficiently, we propose a two-stage approach for online DialectFP4 activation quantization. Importantly, DialectFP4 ensures energy efficiency by selecting representable values as scaled integers compatible with low-precision integer arithmetic. BlockDialect achieves a 10.78% (7.48%) accuracy gain on the LLaMA3-8B (LLaMA2-7B) model compared to the MXFP4 format with lower bit usage per value, while falling only 5.45% (2.69%) below full precision even when quantizing full-path matrix multiplication. By focusing on how to represent rather than how to scale, our work presents a promising path for energy-efficient LLM inference.
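The abstract describes assigning each fine-grained block its own number format chosen from a formatbook of FP4 variants. The sketch below is a minimal, illustrative interpretation of that idea in NumPy only; the formatbook entries, block size, error metric, and function names here are assumptions for illustration, not the paper's actual DialectFP4 definitions or its two-stage online activation quantization.

```python
import numpy as np

# Hypothetical formatbook: each "dialect" is a set of representable magnitudes
# (scaled integers) standing in for an FP4 variant. These specific level sets
# are illustrative and are NOT taken from the paper.
FORMATBOOK = {
    "dialect_a": np.array([0, 1, 2, 3, 4, 6, 8, 12]),  # E2M1-like spacing
    "dialect_b": np.array([0, 1, 2, 3, 4, 5, 6, 7]),   # INT-like spacing
    "dialect_c": np.array([0, 1, 2, 4, 6, 8, 10, 12]),
}

def quantize_block(block, levels):
    """Quantize a 1-D block to signed versions of the given magnitude levels,
    scaling the block so its largest magnitude maps to the largest level."""
    amax = np.max(np.abs(block)) + 1e-12
    scale = levels[-1] / amax
    scaled = block * scale
    signed_levels = np.concatenate([-levels[::-1], levels])
    # Round each element to the nearest representable signed level.
    idx = np.abs(scaled[:, None] - signed_levels[None, :]).argmin(axis=1)
    return signed_levels[idx] / scale

def select_format(block):
    """Pick the formatbook entry with the lowest mean-squared quantization error."""
    errors = {
        name: np.mean((block - quantize_block(block, levels)) ** 2)
        for name, levels in FORMATBOOK.items()
    }
    return min(errors, key=errors.get)

# Example: choose a per-block format for one 32-element block of activations.
block = np.random.randn(32).astype(np.float32)
best = select_format(block)
print(best, quantize_block(block, FORMATBOOK[best])[:4])
```

Because every dialect's representable values are (scaled) integers, the inner products after quantization can, in principle, be carried out with low-precision integer arithmetic, which is the energy argument the abstract makes; the selection loop above is purely a software illustration of the per-block format choice.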
Submission history
From: Wonsuk Jang
[v1] Thu, 2 Jan 2025 08:57:00 UTC (1,735 KB)
[v2] Fri, 3 Jan 2025 09:27:46 UTC (1,188 KB)
[v3] Tue, 21 Jan 2025 07:34:54 UTC (3,383 KB)