Jumping through Local Minima: Quantization in the Loss Landscape of Vision Transformers, by Natalia Frumkin and 2 other authors
Abstract: Quantization scale and bit-width are the most important parameters when considering how to quantize a neural network. Prior work focuses on optimizing quantization scales in a global manner through gradient methods (gradient descent and Hessian analysis). Yet, when applying perturbations to quantization scales, we observe a very jagged, highly non-smooth test loss landscape. In fact, small perturbations in quantization scale can greatly affect accuracy, yielding a $0.5$-$0.8\%$ accuracy boost in 4-bit quantized vision transformers (ViTs). In this regime, gradient methods break down, since they cannot reliably reach local minima. In our work, dubbed Evol-Q, we use evolutionary search to effectively traverse the non-smooth landscape. Additionally, we propose using an InfoNCE loss, which not only helps combat overfitting on the small calibration dataset ($1,000$ images) but also makes traversing such a highly non-smooth surface easier. Evol-Q improves the top-1 accuracy of a fully quantized ViT-Base by $10.30\%$, $0.78\%$, and $0.15\%$ for $3$-bit, $4$-bit, and $8$-bit weight quantization levels, respectively. Extensive experiments on a variety of CNN and ViT architectures further demonstrate its robustness in extreme quantization scenarios. Our code is available at this https URL.
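To make the abstract's two ingredients concrete, below is a minimal, illustrative sketch (not the authors' released implementation) of (i) an InfoNCE loss that contrasts quantized-model features against full-precision features over a calibration batch, and (ii) a small-population, gradient-free evolutionary search over quantization-scale perturbations. The `fitness_fn` callable and any model/quantization plumbing it would wrap are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def info_nce(q_feats, fp_feats, temperature=0.07):
    """InfoNCE between quantized-model and full-precision features.

    Positive pair: the same calibration image under both models;
    negatives: the other images in the batch.
    """
    q = F.normalize(q_feats, dim=-1)
    k = F.normalize(fp_feats, dim=-1)
    logits = q @ k.t() / temperature                    # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)   # diagonal entries are positives
    return F.cross_entropy(logits, labels)

def evolutionary_search(scales, fitness_fn, generations=10, pop_size=15, sigma=1e-3):
    """Gradient-free evolutionary search over quantization scales.

    Each generation samples small random perturbations of the current best
    scales and keeps the fittest candidate (elitism), which lets the search
    hop between the sharp local minima of a non-smooth loss surface where
    gradient methods stall. `fitness_fn` is a hypothetical callable that
    quantizes the model with the candidate scales and returns a loss
    (e.g., the InfoNCE loss above) on the calibration set.
    """
    best = scales.clone()
    best_fit = fitness_fn(best)
    for _ in range(generations):
        for _ in range(pop_size):
            candidate = best + sigma * torch.randn_like(best)  # local perturbation
            fit = fitness_fn(candidate)
            if fit < best_fit:
                best, best_fit = candidate, fit
    return best
```

In this sketch, the fitness of a candidate set of scales would be the InfoNCE loss between the quantized and full-precision models' outputs on the small ($1,000$-image) calibration set, so the search never touches gradients of the jagged landscape it is traversing.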
Submission history
From: Natalia Frumkin [view email]
[v1]
Mon, 21 Aug 2023 16:03:35 UTC (2,774 KB)
[v2]
Sun, 29 Oct 2023 23:00:05 UTC (2,774 KB)
[v3]
Thu, 26 Sep 2024 15:37:58 UTC (5,806 KB)