[Submitted on 25 Oct 2024]
Improving Multimodal Large Language Models Using Continual Learning
Shikhar Srivastava and 3 other authors
Abstract: Generative large language models (LLMs) exhibit impressive capabilities, which can be further augmented by integrating a pre-trained vision model into the original LLM to create a multimodal LLM (MLLM). However, this integration often significantly decreases performance on natural language understanding and generation tasks, compared to the original LLM. This study investigates this issue using the LLaVA MLLM, treating the integration as a continual learning problem. We evaluate five continual learning methods to mitigate forgetting and identify a technique that enhances visual understanding while minimizing linguistic performance loss. Our approach reduces linguistic performance degradation by up to 15% over the LLaVA recipe, while maintaining high multimodal accuracy. We also demonstrate the robustness of our method through continual learning on a sequence of vision-language tasks, effectively preserving linguistic skills while acquiring new multimodal capabilities.
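To make the continual-learning framing concrete, the sketch below shows one standard way such forgetting-mitigation methods are often implemented: a distillation (LwF-style) penalty that keeps the MLLM's text-only predictions close to those of the frozen original LLM while it is fine-tuned on vision-language data. This is an illustrative assumption, not the specific technique the paper identifies; the model and batch names are hypothetical placeholders, and HuggingFace-style outputs with `.loss` and `.logits` attributes are assumed.

```python
# Illustrative sketch (assumed, not the paper's exact recipe): regularize
# linguistic behaviour with a distillation term against the frozen base LLM.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student next-token distributions."""
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)


def training_step(mllm, frozen_llm, vision_batch, text_batch, lambda_distill=0.5):
    # Standard multimodal objective on the vision-language batch.
    loss = mllm(**vision_batch).loss

    # Penalize drift of text-only predictions away from the original LLM,
    # which is what degrades NLU/NLG performance after multimodal training.
    with torch.no_grad():
        teacher_logits = frozen_llm(**text_batch).logits
    student_logits = mllm(**text_batch).logits
    loss = loss + lambda_distill * distillation_loss(student_logits, teacher_logits)
    return loss
```

Other common choices in this family (e.g., rehearsal of text-only data, LoRA-restricted updates, or weight-space regularization) slot into the same training loop by swapping the penalty term.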
Submission history
From: Shikhar Srivastava
[v1] Fri, 25 Oct 2024 18:50:40 UTC (1,321 KB)