Mitigate the Gap: Investigating Approaches for Improving Cross-Modal Alignment in CLIP

Sedigheh Eslami, Gerard de Melo

Abstract: Contrastive Language-Image Pre-training (CLIP) has demonstrated remarkable improvements in zero-shot classification and cross-modal vision-language tasks. Yet, from a geometric point of view, the CLIP embedding space has been found to exhibit a pronounced modality gap: the space is overly sparse and disconnected, with each modality densely clustered in a distinct subregion of the hypersphere. In this work, we aim to answer three main questions: (1) Does sharing the parameter space between the multi-modal encoders reduce the modality gap? (2) Can the gap be mitigated by pushing apart the uni-modal embeddings via intra-modality separation? (3) How do these gap-reduction approaches affect downstream performance? To answer these questions, we design AlignCLIP and show through extensive experiments that it noticeably improves the cross-modal alignment of the embeddings, thereby reducing the modality gap, while improving performance across several zero-shot and fine-tuning downstream evaluations.
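To make these notions concrete, below is a minimal PyTorch sketch, not the paper's implementation: it measures the modality gap as the distance between the centroids of the L2-normalized image and text embeddings, and augments the standard CLIP contrastive loss with an intra-modality separation penalty of the kind question (2) describes. The function names and the weighting hyperparameter `alpha` are illustrative assumptions, not taken from the paper.

```python
# Sketch only: illustrates the modality gap and intra-modality separation
# as described in the abstract; this is not the authors' AlignCLIP code.
import torch
import torch.nn.functional as F


def modality_gap(image_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    """Euclidean distance between the two modality centroids on the unit hypersphere."""
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    return (img.mean(dim=0) - txt.mean(dim=0)).norm()


def intra_modality_separation(emb: torch.Tensor) -> torch.Tensor:
    """Penalize high pairwise cosine similarity within one modality,
    pushing the uni-modal embeddings apart (assumed form of the penalty)."""
    emb = F.normalize(emb, dim=-1)
    sim = emb @ emb.t()                                        # pairwise cosine similarities
    sim = sim - torch.eye(len(emb), device=emb.device)         # zero out self-similarity
    return sim.mean()


def training_loss(image_emb, text_emb, logit_scale, alpha=0.1):
    """Standard CLIP contrastive loss plus the separation term above.
    `alpha` is an assumed weighting hyperparameter."""
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = logit_scale * img @ txt.t()
    labels = torch.arange(len(img), device=img.device)
    contrastive = (F.cross_entropy(logits, labels) +
                   F.cross_entropy(logits.t(), labels)) / 2
    separation = intra_modality_separation(img) + intra_modality_separation(txt)
    return contrastive + alpha * separation
```

Question (1), sharing the parameter space between the encoders, is an architectural choice (e.g., tying transformer weights across the image and text towers) rather than a loss term, so it is not shown in the sketch.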

Submission history

From: Sedigheh Eslami
[v1] Tue, 25 Jun 2024 15:24:02 UTC (11,390 KB)
[v2] Wed, 26 Jun 2024 10:58:48 UTC (11,400 KB)
[v3] Mon, 16 Sep 2024 15:32:11 UTC (9,672 KB)


