SageAttention: Accurate 8-Bit Attention for Plug-and-play Inference Acceleration



By Jintao Zhang and 5 other authors

Abstract: The transformer architecture predominates across various models. As the heart of the transformer, attention has a computational complexity of O(N^2), compared with O(N) for linear transformations, so at large sequence lengths attention becomes the primary time-consuming component. Although quantization has proven effective for accelerating model inference, existing quantization methods focus primarily on optimizing the linear layers. In response, we first analyze the feasibility of quantizing attention in detail. We then propose SageAttention, a highly efficient and accurate quantization method for attention. Our approach outperforms FlashAttention2 and xformers in OPS (operations per second) by about 2.1x and 2.7x, respectively. SageAttention also achieves superior accuracy to FlashAttention3. Comprehensive experiments confirm that our approach incurs almost no end-to-end metric loss across diverse models, including those for large language processing, image generation, and video generation. The code is available at this https URL.
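To make the abstract's core idea concrete, the sketch below shows what "8-bit attention" can look like in principle: Q and K are quantized to INT8, the Q·K^T scores are computed on the quantized values and then dequantized before the softmax. This is only a minimal illustration, not SageAttention's actual algorithm or API; the helper names (quantize_int8, attention_int8) and the symmetric per-tensor quantization scheme are assumptions, and a real kernel would accumulate INT8 products in INT32 on Tensor Cores rather than emulating the arithmetic in float as done here.

```python
# Minimal sketch of 8-bit quantized attention (illustration only, not SageAttention's kernel).
import torch

def quantize_int8(x: torch.Tensor):
    # Symmetric per-tensor quantization: map [-max|x|, max|x|] onto [-127, 127].
    scale = x.abs().amax().clamp(min=1e-8) / 127.0
    q = torch.round(x / scale).clamp(-127, 127).to(torch.int8)
    return q, scale

def attention_int8(q, k, v):
    # q, k, v: (batch, heads, seq_len, head_dim) in fp16/fp32.
    d = q.shape[-1]
    q_i8, q_scale = quantize_int8(q)
    k_i8, k_scale = quantize_int8(k)
    # Emulate the INT8 x INT8 -> INT32 product in fp32 (values are small ints, so this is exact).
    scores = torch.matmul(q_i8.float(), k_i8.float().transpose(-1, -2))
    # Dequantize with the two scales and apply the usual 1/sqrt(d) factor.
    scores = scores * (q_scale * k_scale) / d ** 0.5
    probs = torch.softmax(scores, dim=-1)
    # Keep the P @ V product in higher precision for this illustration.
    return torch.matmul(probs, v.float()).to(v.dtype)

# Usage: compare against a full-precision reference to gauge quantization error.
q = torch.randn(1, 8, 1024, 64, dtype=torch.float16)
k = torch.randn(1, 8, 1024, 64, dtype=torch.float16)
v = torch.randn(1, 8, 1024, 64, dtype=torch.float16)
out = attention_int8(q, k, v)
ref = torch.softmax((q.float() @ k.float().transpose(-1, -2)) / 64 ** 0.5, dim=-1) @ v.float()
print((out.float() - ref).abs().max())
```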

Submission history

From: Jintao Zhang
[v1]
Thu, 3 Oct 2024 10:25:23 UTC (14,727 KB)
[v2]
Thu, 14 Nov 2024 08:39:54 UTC (13,840 KB)
[v3]
Tue, 24 Dec 2024 02:29:17 UTC (32,831 KB)


