Rapid Switching and Multi-Adapter Fusion via Sparse High Rank Adapters

[Submitted on 22 Jul 2024]

By Kartikeya Bhardwaj and 11 other authors

Abstract: In this paper, we propose Sparse High Rank Adapters (SHiRA), which directly finetune 1-2% of the base model weights while leaving the rest unchanged, resulting in a highly sparse adapter. This high sparsity incurs no inference overhead, enables rapid switching directly in the fused mode, and significantly reduces concept loss during multi-adapter fusion. Our extensive experiments on LVMs and LLMs demonstrate that finetuning merely 1-2% of the parameters in the base model is sufficient for many adapter tasks and significantly outperforms Low Rank Adaptation (LoRA). We also show that SHiRA is orthogonal to advanced LoRA methods such as DoRA and can easily be combined with existing techniques.
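The core mechanism described in the abstract is easy to picture in code. Below is a minimal PyTorch sketch of a sparse-adapter linear layer, written from the abstract's description rather than from the authors' released implementation: the class name `SHiRALinear`, the random mask selection, and the `fuse`/`unfuse` helpers are illustrative assumptions. A fixed binary mask marks the roughly 1-2% of base-weight entries that are allowed to change; training updates only those entries, and the resulting sparse delta can be added to or subtracted from the frozen base weight, which is what makes fused-mode switching cheap.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SHiRALinear(nn.Module):
    """Linear layer with a sparse high-rank adapter over a frozen base weight (illustrative sketch)."""

    def __init__(self, base: nn.Linear, sparsity: float = 0.01):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # base model stays frozen
        # Fixed binary mask selecting ~1-2% of weight entries (random here;
        # the paper studies several mask-selection strategies).
        self.register_buffer("mask", (torch.rand_like(base.weight) < sparsity).float())
        # Dense parameter, but its gradient is multiplied by the mask through the
        # forward pass, so only the selected entries are ever updated.
        self.delta = nn.Parameter(torch.zeros_like(base.weight))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.base.weight + self.delta * self.mask  # frozen base + sparse update
        return F.linear(x, w, self.base.bias)

    @torch.no_grad()
    def fuse(self):
        """Fold the sparse delta into the base weight: zero inference overhead."""
        self.base.weight += self.delta * self.mask

    @torch.no_grad()
    def unfuse(self):
        """Undo the fold, e.g. right before switching to a different adapter."""
        self.base.weight -= self.delta * self.mask


# Illustrative usage: wrap one projection of a (hypothetical) base model and
# train only the masked entries.
layer = SHiRALinear(nn.Linear(4096, 4096), sparsity=0.02)
optimizer = torch.optim.AdamW([layer.delta], lr=1e-4)
y = layer(torch.randn(8, 4096))
```

Because two adapters trained with disjoint or lightly overlapping masks touch different weight entries, fusing several of them into the same base model interferes far less than summing dense low-rank updates, which is the intuition behind the reduced concept loss the abstract reports for multi-adapter fusion.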

Submission history

From: Nilesh Prasad Pandey
[v1]
Mon, 22 Jul 2024 22:46:36 UTC (14,323 KB)


