HydraViT: Stacking Heads for a Scalable ViT

Authors: Janek Haberer and 2 other authors

Abstract: The architecture of Vision Transformers (ViTs), particularly the Multi-head Attention (MHA) mechanism, imposes substantial hardware demands. Deploying ViTs on devices with varying constraints, such as mobile phones, therefore requires multiple models of different sizes. However, this approach has limitations, such as the need to train and store each required model separately. This paper introduces HydraViT, a novel approach that addresses these limitations by stacking attention heads to achieve a scalable ViT. By repeatedly varying the size of the embedding dimension in each layer, and the corresponding number of attention heads in MHA, during training, HydraViT induces multiple subnetworks. HydraViT thereby achieves adaptability across a wide spectrum of hardware environments while maintaining performance. Our experimental results demonstrate the efficacy of HydraViT in achieving a scalable ViT with up to 10 subnetworks, covering a wide range of resource constraints. Compared to the baselines on ImageNet-1K, HydraViT achieves up to 5 p.p. more accuracy at the same GMACs and up to 7 p.p. more accuracy at the same throughput, making it an effective solution for scenarios where hardware availability is diverse or varies over time. Source code available at this https URL.
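The core mechanism described in the abstract lends itself to a short illustration. Below is a minimal sketch, assuming PyTorch, of how a single MHA layer could serve multiple subnetworks by weight slicing: the layer stores projections sized for the largest configuration, and a subnetwork with k heads uses only the first k * head_dim rows and columns of each projection. All names here (SlicedMHA, head_dim, num_heads) are illustrative assumptions, not identifiers from the paper's released code.

```python
# Illustrative sketch of the head-stacking idea, NOT the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SlicedMHA(nn.Module):
    def __init__(self, max_heads: int, head_dim: int):
        super().__init__()
        self.head_dim = head_dim
        max_dim = max_heads * head_dim
        # Weights are sized for the largest subnetwork and sliced at run time.
        self.qkv = nn.Linear(max_dim, 3 * max_dim)
        self.proj = nn.Linear(max_dim, max_dim)

    def forward(self, x: torch.Tensor, num_heads: int) -> torch.Tensor:
        # x: (batch, tokens, num_heads * head_dim) for the chosen subnetwork.
        b, t, d = x.shape
        assert d == num_heads * self.head_dim
        max_dim = self.proj.in_features
        # Slice the shared projections so only the first `num_heads` heads act.
        w, bias = self.qkv.weight, self.qkv.bias
        q = F.linear(x, w[:max_dim][:d, :d], bias[:max_dim][:d])
        k = F.linear(x, w[max_dim:2 * max_dim][:d, :d], bias[max_dim:2 * max_dim][:d])
        v = F.linear(x, w[2 * max_dim:][:d, :d], bias[2 * max_dim:][:d])
        # Reshape to (batch, heads, tokens, head_dim) and run attention.
        def split(z):
            return z.view(b, t, num_heads, self.head_dim).transpose(1, 2)
        out = F.scaled_dot_product_attention(split(q), split(k), split(v))
        out = out.transpose(1, 2).reshape(b, t, d)
        return F.linear(out, self.proj.weight[:d, :d], self.proj.bias[:d])

mha = SlicedMHA(max_heads=12, head_dim=64)
for k in (3, 6, 12):
    x = torch.randn(2, 197, k * 64)
    print(k, mha(x, num_heads=k).shape)
```

In training, a subnetwork size would be sampled repeatedly so the shared, ordered weights learn to serve every configuration; at deployment, one stored model then covers the full range of head counts.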

Submission history

From: Janek Haberer
[v1] Thu, 26 Sep 2024 15:52:36 UTC (2,773 KB)
[v2] Thu, 5 Dec 2024 16:24:15 UTC (2,984 KB)


