From the Least to the Most: Building a Plug-and-Play Visual Reasoner via Data Synthesis
Chuanqi Cheng and 3 other authors
Abstract: We explore multi-step reasoning in vision-language models (VLMs). The problem is challenging, as reasoning data consisting of multiple steps of visual and language processing are scarce. To overcome this challenge, we first introduce a least-to-most visual reasoning paradigm, which interleaves steps of decomposing a question into sub-questions and invoking external tools to resolve the sub-questions. Based on this paradigm, we further propose a novel data synthesis approach that can automatically create questions and multi-step reasoning paths for an image in a bottom-up manner. Our approach divides the complex synthesis task into a few simple sub-tasks and relies (almost entirely) on open-sourced models to accomplish them. The entire synthesis process is therefore reproducible and cost-efficient, and the quality of the synthesized data is guaranteed. With this approach, we construct $50$k visual reasoning examples. We then develop a visual reasoner through supervised fine-tuning, which is capable of generally enhancing the reasoning abilities of a wide range of existing VLMs in a plug-and-play fashion. Extensive experiments indicate that the visual reasoner can consistently and significantly improve four VLMs on four VQA benchmarks. Our code and dataset are available at this https URL.
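The least-to-most paradigm described in the abstract, which interleaves sub-question decomposition with external tool calls, can be illustrated by the following minimal sketch. The helper names (`decompose_next`, `call_tool`, `compose_answer`) are hypothetical placeholders standing in for the paper's decomposition model and visual tools, not its actual interfaces.

```python
# Minimal sketch of a least-to-most visual reasoning loop (assumed structure,
# not the paper's implementation). The callables passed in are hypothetical:
#   decompose_next(image, question, steps) -> (sub_question | None, tool_name)
#   call_tool(tool_name, image, sub_question) -> tool_output
#   compose_answer(question, steps) -> final answer
from dataclasses import dataclass, field


@dataclass
class ReasoningStep:
    sub_question: str
    tool: str
    tool_output: str


@dataclass
class ReasoningPath:
    question: str
    steps: list = field(default_factory=list)
    answer: str = ""


def least_to_most_reason(image, question, decompose_next, call_tool,
                         compose_answer, max_steps=5):
    """Interleave question decomposition with tool invocation until resolved."""
    path = ReasoningPath(question=question)
    for _ in range(max_steps):
        # Ask the decomposer for the next sub-question and which tool should answer it.
        sub_q, tool = decompose_next(image, question, path.steps)
        if sub_q is None:
            # The decomposer signals that no further sub-questions are needed.
            break
        # Resolve the sub-question with an external tool and record the step.
        output = call_tool(tool, image, sub_q)
        path.steps.append(ReasoningStep(sub_q, tool, output))
    # Aggregate the recorded steps into a final answer to the original question.
    path.answer = compose_answer(question, path.steps)
    return path
```

Each recorded `ReasoningPath` corresponds to one multi-step reasoning example of the kind the paper synthesizes for supervised fine-tuning.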
Submission history
From: Chuanqi Cheng
[v1] Fri, 28 Jun 2024 14:04:10 UTC (12,387 KB)
[v2] Fri, 11 Oct 2024 15:41:23 UTC (12,391 KB)