Probing Multimodal Large Language Models for Global and Local Semantic Representations

By Mingxu Tao and 5 other authors


Abstract: The advancement of Multimodal Large Language Models (MLLMs) has greatly accelerated the development of applications for understanding integrated text and images. Recent works leverage image-caption datasets to train MLLMs, achieving state-of-the-art performance on image-to-text tasks. However, few studies have explored which layers of MLLMs contribute most to encoding global image information, which plays a vital role in multimodal comprehension and generation. In this study, we find that the intermediate layers, rather than the topmost layers, encode more global semantic information: their representation vectors perform better on visual-language entailment tasks. We further probe the models' local semantic representations through object recognition tasks and find that the topmost layers may focus excessively on local information, diminishing their ability to encode global information. Our code and data are released via this https URL.
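
As a rough illustration of the layer-wise probing setup the abstract describes (not the authors' released code), the sketch below fits a separate linear probe on frozen per-layer representation vectors and compares accuracies across layers; higher intermediate-layer accuracy on an entailment-style task would match the paper's finding. The array shapes, layer count, and labels are hypothetical placeholders.

    # Minimal layer-wise probing sketch (assumed setup, not the paper's released code).
    # Assumes `hidden_states` holds frozen per-layer representation vectors with shape
    # (num_layers, num_examples, hidden_dim) and `labels` holds task labels, e.g. for
    # a visual-language entailment task.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    num_layers, num_examples, hidden_dim = 24, 1000, 768      # hypothetical sizes
    hidden_states = rng.normal(size=(num_layers, num_examples, hidden_dim))
    labels = rng.integers(0, 2, size=num_examples)            # hypothetical binary labels

    for layer in range(num_layers):
        # Train/evaluate one linear probe per layer on that layer's frozen features.
        X_train, X_test, y_train, y_test = train_test_split(
            hidden_states[layer], labels, test_size=0.2, random_state=0)
        probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        print(f"layer {layer:02d}: probe accuracy = {probe.score(X_test, y_test):.3f}")

In practice the random features above would be replaced with hidden states extracted from each layer of an MLLM, and the per-layer accuracy curve would be compared between global (entailment) and local (object recognition) probing tasks.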

Submission history

From: Mingxu Tao
[v1] Tue, 27 Feb 2024 08:27:15 UTC (841 KB)
[v2] Wed, 27 Mar 2024 02:59:57 UTC (850 KB)
[v3] Thu, 21 Nov 2024 07:03:33 UTC (6,899 KB)


