Self-Adapting Large Visual-Language Models to Edge Devices across Visual Modalities



By Kaiwen Cai and 4 other authors

Abstract: Recent advancements in Vision-Language (VL) models have sparked interest in their deployment on edge devices, yet challenges in handling diverse visual modalities, manual annotation, and computational constraints remain. We introduce EdgeVL, a novel framework that bridges this gap by seamlessly integrating dual-modality knowledge distillation and quantization-aware contrastive learning. This approach enables the adaptation of large VL models, like CLIP, for efficient use with both RGB and non-RGB images on resource-limited devices without the need for manual annotations. EdgeVL not only transfers visual-language alignment capabilities to compact models but also maintains feature quality post-quantization, significantly enhancing open-vocabulary classification performance across various visual modalities. Our work represents the first systematic effort to adapt large VL models for edge deployment, showcasing accuracy improvements of up to 15.4% on multiple datasets and up to a 93-fold reduction in model size.
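
To make the abstract's two ingredients more concrete, the sketch below illustrates, under stated assumptions, how dual-modality knowledge distillation from a frozen CLIP-style teacher and a contrastive alignment between paired RGB and non-RGB inputs could be combined in PyTorch. The StudentEncoder, the embedding size, and the random stand-in for teacher features are all hypothetical, and quantization-aware training is only indicated in a comment; this is not the EdgeVL implementation.

```python
# Illustrative sketch only -- not the authors' released code.
# The "teacher_emb" tensor stands in for frozen CLIP image embeddings;
# the student is a small encoder intended for edge deployment.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 512  # assumed CLIP-style embedding size


class StudentEncoder(nn.Module):
    """Compact encoder producing CLIP-aligned embeddings for RGB or non-RGB input."""

    def __init__(self, in_ch: int = 3, dim: int = EMBED_DIM):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )

    def forward(self, x):
        return F.normalize(self.backbone(x), dim=-1)


def distillation_loss(student_emb, teacher_emb):
    """Pull student embeddings toward the frozen teacher's embeddings (cosine distance)."""
    return 1.0 - F.cosine_similarity(student_emb, teacher_emb.detach(), dim=-1).mean()


def contrastive_loss(emb_a, emb_b, temperature: float = 0.07):
    """InfoNCE-style loss aligning paired RGB / non-RGB embeddings in both directions."""
    logits = emb_a @ emb_b.t() / temperature
    targets = torch.arange(emb_a.size(0), device=emb_a.device)
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    student = StudentEncoder()
    # In a quantization-aware setup, the student could be prepared with
    # torch.ao.quantization.prepare_qat() so these features reflect
    # post-quantization behaviour; omitted here for brevity.
    rgb = torch.randn(8, 3, 64, 64)    # RGB batch
    depth = torch.randn(8, 3, 64, 64)  # paired non-RGB batch (e.g. depth rendered to 3 channels)
    teacher_emb = F.normalize(torch.randn(8, EMBED_DIM), dim=-1)  # stand-in for CLIP outputs

    e_rgb, e_depth = student(rgb), student(depth)
    loss = distillation_loss(e_rgb, teacher_emb) + contrastive_loss(e_rgb, e_depth)
    loss.backward()
    print(f"combined loss: {loss.item():.4f}")
```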

Submission history

From: Kaiwen Cai
[v1] Thu, 7 Mar 2024 21:34:40 UTC (3,623 KB)
[v2] Thu, 18 Jul 2024 06:13:41 UTC (12,373 KB)
[v3] Tue, 1 Oct 2024 14:22:15 UTC (22,111 KB)


