Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers

Authors: Sayed Mohammad Vakilzadeh Hatefi and 5 other authors

Abstract: To solve ever more complex problems, Deep Neural Networks are scaled to billions of parameters, leading to huge computational costs. An effective approach to reduce computational requirements and increase efficiency is to prune unnecessary components of these often over-parameterized networks. Previous work has shown that attribution methods from the field of eXplainable AI serve as effective means to extract and prune the least relevant network components in a few-shot fashion. We extend the current state by proposing to explicitly optimize hyperparameters of attribution methods for the task of pruning, and further include transformer-based networks in our analysis. Our approach yields higher model compression rates of large transformer and convolutional architectures (VGG, ResNet, ViT) compared to previous works, while still attaining high performance on ImageNet classification tasks. Here, our experiments indicate that transformers have a higher degree of over-parameterization compared to convolutional neural networks. Code is available at this https URL.
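The pruning criterion the abstract describes, ranking network components by attribution relevance and removing the least relevant fraction, can be sketched as follows. This is a toy NumPy illustration of the general idea, not the paper's implementation; the function name, the relevance scores, and the compression rate are all hypothetical:

```python
import numpy as np

def prune_by_relevance(relevance, compression_rate):
    """Return a boolean keep-mask over components (e.g. channels or
    attention heads), masking out the `compression_rate` fraction with
    the lowest attribution scores.

    Hypothetical helper for illustration only; the paper attaches such
    masks to real CNN/ViT components and optimizes the attribution
    method's hyperparameters for this pruning task."""
    n = len(relevance)
    n_prune = int(n * compression_rate)
    # Indices of the least relevant components (ascending sort).
    prune_idx = np.argsort(relevance)[:n_prune]
    mask = np.ones(n, dtype=bool)
    mask[prune_idx] = False
    return mask

# Toy example: 8 components with made-up relevance scores,
# pruning the 50% least relevant.
scores = np.array([0.9, 0.1, 0.5, 0.05, 0.7, 0.2, 0.8, 0.3])
mask = prune_by_relevance(scores, 0.5)  # keeps components 0, 2, 4, 6
```

In the few-shot setting the abstract refers to, the relevance scores would come from running an attribution method (such as Layer-wise Relevance Propagation) on a small number of reference samples, rather than from the made-up array above.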

Submission history

From: Sayed Mohammad Vakilzadeh Hatefi
[v1] Thu, 22 Aug 2024 17:35:18 UTC (34,216 KB)
[v2] Wed, 23 Oct 2024 17:53:24 UTC (10,904 KB)
