Transferable and Principled Efficiency for Open-Vocabulary Segmentation



By Jingxuan Xu and 3 other authors

Abstract: The recent success of pre-trained vision-language foundation models has made Open-Vocabulary Segmentation (OVS) possible. Despite the promising performance, this approach introduces heavy computational overhead from two sources: 1) the large model size of the backbone; 2) the expensive cost of fine-tuning. These challenges hinder the OVS strategy from being widely applicable and affordable in real-world scenarios. Although traditional methods such as model compression and efficient fine-tuning can address these challenges, they often rely on heuristics. This means their solutions cannot be readily transferred across models and necessitate costly re-training for each new one. In the context of efficient OVS, we aim to achieve performance comparable to, or even better than, prior OVS works built on large vision-language foundation models, using smaller models that incur lower training costs. The core strategy is to make our efficiency principled and thus seamlessly transferable from one OVS framework to others without further customization. Comprehensive experiments on diverse OVS benchmarks demonstrate a superior trade-off between segmentation accuracy and computation cost compared with previous works. Our code is available at this https URL
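To make the "efficient fine-tuning" baseline that the abstract contrasts with its principled, transferable approach more concrete, the sketch below shows one common heuristic recipe: freezing a large pre-trained backbone and training only a small low-rank adapter on top of one of its layers. This is a minimal illustration under assumed names and dimensions, not the method proposed in the paper.

```python
# Minimal sketch of a heuristic efficient fine-tuning baseline (a LoRA-style
# low-rank adapter on a frozen layer). NOT the paper's method; module names,
# dimensions, and the rank value are illustrative assumptions.
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    """Adds a trainable low-rank update to a frozen linear layer."""

    def __init__(self, frozen_linear: nn.Linear, rank: int = 8):
        super().__init__()
        self.frozen = frozen_linear
        for p in self.frozen.parameters():
            p.requires_grad = False  # backbone weights stay fixed
        in_dim, out_dim = frozen_linear.in_features, frozen_linear.out_features
        self.down = nn.Linear(in_dim, rank, bias=False)  # trainable
        self.up = nn.Linear(rank, out_dim, bias=False)   # trainable
        nn.init.zeros_(self.up.weight)  # start as a zero (identity-preserving) update

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the small trainable correction.
        return self.frozen(x) + self.up(self.down(x))


if __name__ == "__main__":
    # Pretend this projection came from a large pre-trained backbone.
    backbone_proj = nn.Linear(768, 768)
    adapted = LowRankAdapter(backbone_proj, rank=8)

    trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
    total = sum(p.numel() for p in adapted.parameters())
    print(f"trainable params: {trainable} / {total}")  # only the adapter trains
```

Such adapters do cut fine-tuning cost, but choices like the rank and where to insert the adapters are typically hand-tuned per model, which is the kind of heuristic, non-transferable design the abstract argues against.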

Submission history

From: Jingxuan Xu

[v1] Thu, 11 Apr 2024 03:08:53 UTC (7,000 KB)
[v2] Tue, 4 Jun 2024 03:15:51 UTC (7,000 KB)
[v3] Tue, 17 Sep 2024 03:21:01 UTC (7,000 KB)


