Img-Diff: Contrastive Data Synthesis for Multimodal Large Language Models
by Qirui Jiao and 5 other authors
Abstract: High-performance Multimodal Large Language Models (MLLMs) are heavily dependent on data quality. To advance fine-grained image recognition within MLLMs, we introduce a novel data synthesis method inspired by contrastive learning and image difference captioning. Our key idea involves challenging the model to discern both matching and distinct elements by scrutinizing object differences in detailed regions across similar images. We begin by generating pairs of similar images that emphasize object variations. Following this, we employ a Difference Area Generator to pinpoint object differences, and subsequently, a Difference Captions Generator to articulate these differences. This process results in a high-quality dataset of “object replacement” samples, termed Img-Diff, which can be scaled as needed due to its automated nature. We leverage this generated dataset to fine-tune state-of-the-art (SOTA) MLLMs, such as InternVL2, achieving substantial improvements across various image difference and Visual Question Answering tasks. Notably, the trained models significantly outperform existing SOTA models like GPT-4V and Gemini on the MMVP benchmark. Additionally, we conduct comprehensive evaluations to validate the dataset’s diversity, quality, and robustness, offering several insights into the synthesis of such contrastive datasets. We release our code and dataset to encourage further research on multimodal data synthesis and MLLMs’ fundamental capabilities for image understanding.
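To make the described pipeline concrete, below is a minimal Python sketch of an Img-Diff-style flow. It is not the authors' implementation: the function names (`difference_areas`, `caption_difference`, `build_sample`), the pixel-level thresholding, and the stub caption are all hypothetical stand-ins. In the paper, the Difference Area Generator relies on object detection and filtering over generated image pairs, and the Difference Captions Generator is an MLLM prompted to describe the changed region.

```python
# Illustrative sketch of an Img-Diff-style data synthesis pipeline.
# All names and logic here are assumptions for demonstration, not the
# paper's actual components.
import numpy as np


def difference_areas(img_a: np.ndarray, img_b: np.ndarray,
                     threshold: int = 30, min_pixels: int = 50):
    """Locate bounding boxes where two similar images differ.

    A crude pixel-level stand-in for the paper's Difference Area
    Generator, which uses detection models and filtering instead.
    """
    # Per-pixel absolute difference summed over RGB channels.
    diff = np.abs(img_a.astype(int) - img_b.astype(int)).sum(axis=-1)
    mask = diff > threshold
    if mask.sum() < min_pixels:
        return []
    ys, xs = np.nonzero(mask)
    # One box around all changed pixels; a real pipeline would cluster
    # changed pixels into per-object regions.
    return [(int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))]


def caption_difference(img_a, img_b, box):
    """Stand-in for the Difference Captions Generator; in the paper an
    MLLM is prompted to describe what changed inside `box`."""
    x0, y0, x1, y1 = box
    return f"An object in region ({x0},{y0})-({x1},{y1}) was replaced."


def build_sample(img_a, img_b):
    """Assemble 'object replacement' samples for one image pair."""
    return [
        {"images": (img_a, img_b), "bbox": box,
         "caption": caption_difference(img_a, img_b, box)}
        for box in difference_areas(img_a, img_b)
    ]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.integers(0, 255, (64, 64, 3), dtype=np.uint8)
    edited = base.copy()
    edited[20:40, 20:40] = 255  # simulate an "object replacement" edit
    for sample in build_sample(base, edited):
        print(sample["bbox"], sample["caption"])
```

Because every stage is automated, such a loop can be run over arbitrarily many generated image pairs, which is what lets the dataset scale as the abstract notes.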
Submission history
From: Daoyuan Chen
[v1] Thu, 8 Aug 2024 17:10:16 UTC (2,881 KB)
[v2] Fri, 9 Aug 2024 14:24:34 UTC (2,882 KB)
[v3] Thu, 19 Dec 2024 11:04:20 UTC (3,900 KB)