MAMA: Meta-optimized Angular Margin Contrastive Framework for Video-Language Representation Learning
Thong Nguyen and 8 other authors
Abstract: Data quality is a primary factor in the effectiveness of video-language representation learning. However, video-text pairs in existing datasets are often imperfectly aligned, which can yield video-language representations that do not accurately reflect cross-modal semantics. Moreover, existing datasets exhibit an uneven distribution of concepts, hampering downstream performance on under-represented subjects. To address these problems, we propose MAMA, a new approach to learning video-language representations that uses a contrastive objective with a subtractive angular margin to regularize cross-modal representations as they are pushed toward perfect similarity. Furthermore, to adapt to the non-uniform concept distribution, MAMA employs a multi-layer perceptron (MLP)-parameterized weighting function that maps loss values to sample weights, enabling dynamic adjustment of the model's focus throughout training. With training guided by a small amount of unbiased meta-data and augmented by video-text data generated by a large vision-language model, MAMA improves video-language representations and achieves superior performance on commonly used video question answering and text-video retrieval datasets. The code, model, and data have been made available at this https URL.
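The two mechanisms named in the abstract can be sketched concretely. Below is a minimal, hypothetical PyTorch sketch of a symmetric contrastive (InfoNCE) loss in which a margin is subtracted from the positive-pair angle before the cosine is taken, so the objective saturates before the pair reaches perfect (cosine = 1) similarity. The paper's exact formulation may differ; the function name, margin value, and temperature are all illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def angular_margin_contrastive(video_emb, text_emb, margin=0.1, temperature=0.05):
        # Normalize so inner products are cosine similarities.
        v = F.normalize(video_emb, dim=-1)
        t = F.normalize(text_emb, dim=-1)
        cos = (v @ t.T).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)  # pairwise angles between video/text embeddings
        pos = torch.eye(len(v), dtype=torch.bool, device=v.device)
        # Subtract the margin from the positive-pair angles only, so the
        # loss bottoms out before the pair is perfectly aligned.
        theta = torch.where(pos, (theta - margin).clamp(min=0.0), theta)
        logits = torch.cos(theta) / temperature
        labels = torch.arange(len(v), device=v.device)
        # Symmetric InfoNCE over video-to-text and text-to-video directions.
        return 0.5 * (F.cross_entropy(logits, labels)
                      + F.cross_entropy(logits.T, labels))

The MLP-parameterized weighting function can likewise be read as a small network mapping each sample's loss value to a weight in (0, 1). In the paper, its parameters are guided by the small unbiased meta set (a meta-optimization step this sketch omits); the class below is a hypothetical illustration:

    class LossWeightNet(torch.nn.Module):
        # Hypothetical MLP mapping per-sample loss values to sample weights.
        def __init__(self, hidden=64):
            super().__init__()
            self.net = torch.nn.Sequential(
                torch.nn.Linear(1, hidden),
                torch.nn.ReLU(),
                torch.nn.Linear(hidden, 1),
                torch.nn.Sigmoid(),  # weights in (0, 1)
            )

        def forward(self, per_sample_loss):
            # per_sample_loss: shape (B,) -> weights: shape (B,)
            return self.net(per_sample_loss.unsqueeze(-1)).squeeze(-1)

A weighted training loss would then be (weights * per_sample_loss).mean(), with the weighting network updated against the meta set rather than by the main objective.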
Submission history
From: Thong Nguyen
[v1] Thu, 4 Jul 2024 09:52:17 UTC (37,631 KB)
[v2] Sat, 20 Jul 2024 03:15:26 UTC (37,632 KB)
[v3] Tue, 8 Oct 2024 06:02:31 UTC (38,638 KB)
[v4] Thu, 10 Oct 2024 02:10:16 UTC (38,638 KB)