How to Bridge the Gap between Modalities: Survey on Multimodal Large Language Model
by Shezheng Song and 7 other authors
Abstract: We explore Multimodal Large Language Models (MLLMs), which integrate LLMs like GPT-4 to handle multimodal data, including text, images, audio, and more. MLLMs demonstrate capabilities such as generating image captions and answering image-based questions, bridging the gap toward real-world human-computer interaction and hinting at a potential pathway to artificial general intelligence. However, MLLMs still struggle with the semantic gap in multimodal data, which can lead to erroneous outputs and pose potential risks to society. Selecting the appropriate modality alignment method is crucial, as an improper choice may require more parameters without significant performance gains. This paper explores modality alignment methods for LLMs and their current capabilities; effective modality alignment can also help LLMs address environmental issues and enhance accessibility. The study surveys existing modality alignment methods for MLLMs and categorizes them into four groups: (1) Multimodal Converter, which transforms data into a format that LLMs can understand; (2) Multimodal Perceiver, which improves how LLMs perceive different types of data; (3) Tool Learning, which leverages external tools to convert data into a common format, usually text; and (4) Data-Driven Method, which teaches LLMs to understand specific data types from datasets.
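To make the first category concrete, below is a minimal sketch (not taken from the paper) of a "Multimodal Converter"-style connector: a learned projection that maps features from a vision encoder into the LLM's token-embedding space so that image patches can be consumed as ordinary input tokens. The class name, dimensions, and module layout are illustrative assumptions, not the surveyed methods' actual implementations.

```python
import torch
import torch.nn as nn

class VisionToLLMProjector(nn.Module):
    """Illustrative connector: projects vision features into the LLM embedding space."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # A single linear layer is the simplest converter; surveyed methods may
        # use MLPs, Q-Former-style modules, or cross-attention instead.
        self.proj = nn.Linear(vision_dim, llm_dim)

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # vision_features: (batch, num_patches, vision_dim) from a frozen image encoder
        # returns: (batch, num_patches, llm_dim) pseudo-token embeddings for the LLM
        return self.proj(vision_features)

# Usage: the projected image "tokens" would be concatenated with text token
# embeddings before being fed to the (frozen or fine-tuned) LLM.
projector = VisionToLLMProjector()
image_tokens = projector(torch.randn(1, 256, 1024))  # e.g. 256 patch features
```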
Submission history
From: Shezheng Song
[v1] Fri, 10 Nov 2023 09:51:24 UTC (1,478 KB)
[v2] Tue, 19 Dec 2023 03:44:25 UTC (2,307 KB)
[v3] Wed, 8 Jan 2025 02:33:37 UTC (5,122 KB)