As enterprises around the world double down on their AI projects, the availability of high-quality training data has become a major bottleneck. While the public web has largely been exhausted as a data source, major players like OpenAI and Google are securing exclusive partnerships to expand their proprietary datasets, further limiting access for others.
To address this growing concern, Salesforce has taken a major step in the arena of visual training data. The company has just introduced ProVision, a novel framework that programmatically generates visual instruction data. These datasets are systematically synthesized to enable the training of high-performance multimodal language models (MLMs) that can answer questions about images.
The company has already used this approach to build the ProVision-10M dataset and is employing it to boost the performance and accuracy of various multimodal AI models.
For data professionals, this framework represents a significant advancement. By programmatically generating high-quality visual instruction data, ProVision alleviates the dependency on limited or inconsistently labeled datasets, a common challenge in training multimodal systems.
Moreover, the ability to systematically synthesize datasets ensures better control, scalability and consistency, enabling faster iteration cycles and reducing the cost of acquiring domain-specific data. This work complements ongoing research in the synthetic data generation domain and comes just a day after Nvidia’s launch of Cosmos, a suite of world foundation models purpose-built for generating physics-based videos from a combination of inputs, like text, image and video, for physical AI training.
Visual instruction data: a key ingredient for multimodal AI
Today, instruction datasets sit at the core of AI pre-training and fine-tuning. These specialized datasets teach models to follow and respond effectively to specific instructions or queries. In the case of multimodal AI, models gain the ability to analyze content such as images by learning from large sets of images, each accompanied by question-answer pairs, known as visual instruction data, that describe them.
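To make that concrete, a single visual instruction data point pairs an image with one or more question-answer turns. Here is a minimal sketch in Python, with purely illustrative field names rather than any framework's actual schema:

```python
# A minimal, hypothetical visual instruction data point: an image
# reference paired with question-answer turns describing it.
# Field names are illustrative, not ProVision's actual schema.
data_point = {
    "image": "images/busy_street.jpg",
    "conversations": [
        {"question": "How many cars are in the image?", "answer": "Three."},
        {"question": "What color is the bus?", "answer": "Red."},
    ],
}
```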
Now, here’s the thing: Producing these visual instruction datasets is quite a hassle. If an enterprise creates the data manually for each training image, it ends up sinking significant time and human resources into the project. If it instead uses proprietary language models for the task, it has to deal with high computational costs and the risk of hallucinations, where the quality and accuracy of the question-answer pairs may fall short.

Further, proprietary models are also a black box: they make it difficult to interpret the data generation process and to control or customize outputs precisely.
Enter Salesforce ProVision
To address these gaps, the AI research team at Salesforce has come up with ProVision, a framework that employs scene graphs in conjunction with human-written programs to systematically synthesize vision-centric instruction data.
At its core, a scene graph is a structured representation of image semantics: the objects in an image are represented as nodes, the attributes of each object (like color or size) are assigned directly to its node, and the relationships between objects are depicted as directed edges connecting the corresponding nodes. These representations can be sourced from manually annotated datasets such as Visual Genome, or generated with a scene graph generation pipeline that combines state-of-the-art vision models covering different aspects of image semantics, from object and attribute detection to depth estimation.
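To make the structure concrete, here is a minimal sketch of a scene graph in Python. The class names and fields are illustrative only, not ProVision's actual code:

```python
from dataclasses import dataclass, field

# Illustrative scene-graph structures; not ProVision's actual implementation.
@dataclass
class ObjectNode:
    name: str                                            # e.g. "car"
    attributes: list[str] = field(default_factory=list)  # e.g. ["red"]

@dataclass
class Relation:
    subject: str    # id of the subject node
    predicate: str  # e.g. "next to"
    obj: str        # id of the object node

@dataclass
class SceneGraph:
    objects: dict[str, ObjectNode] = field(default_factory=dict)
    relations: list[Relation] = field(default_factory=list)

# A scene with a pedestrian standing next to a red car:
graph = SceneGraph(
    objects={
        "o1": ObjectNode("pedestrian"),
        "o2": ObjectNode("car", attributes=["red"]),
    },
    relations=[Relation(subject="o1", predicate="next to", obj="o2")],
)
```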
Once the scene graphs are ready, they feed programs, written in Python with textual templates, that serve as full-fledged data generators capable of creating question-and-answer pairs for AI training pipelines.
“Each [data] generator utilizes hundreds of pre-defined templates, which systematically integrate these annotations to produce diverse instruction data. These generators are crafted to…compare, retrieve, and reason about basic visual concepts of objects, attributes, and relations based on the detailed information encoded in each scene graph,” the researchers behind the framework wrote in a paper.
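Continuing the hypothetical scene-graph sketch above, a template-based data generator can be approximated in a few lines. This is a simplified illustration of the idea; ProVision's actual generators draw on hundreds of pre-defined templates:

```python
import random

# Two hypothetical question templates; real generators use hundreds.
RELATION_TEMPLATES = [
    "What is the relationship between the {subj} and the {obj}?",
    "How is the {subj} positioned relative to the {obj}?",
]

def generate_relation_qa(graph: SceneGraph) -> list[dict]:
    """Turn each relation in a scene graph into a question-answer pair."""
    qa_pairs = []
    for rel in graph.relations:
        subj = graph.objects[rel.subject].name
        obj = graph.objects[rel.obj].name
        question = random.choice(RELATION_TEMPLATES).format(subj=subj, obj=obj)
        qa_pairs.append({
            "question": question,
            "answer": f"The {subj} is {rel.predicate} the {obj}.",
        })
    return qa_pairs

# e.g. [{'question': 'What is the relationship between the pedestrian and the car?',
#        'answer': 'The pedestrian is next to the car.'}]
print(generate_relation_qa(graph))
```

Because the answer is derived directly from the scene graph rather than sampled from a language model, it is correct by construction, which is what gives the approach its factual grounding.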
ProVision-10M dataset for AI training
In its work, Salesforce used both approaches — augmentation of manually annotated scene graphs and generation from scratch — to set up scene graphs powering 24 single-image data generators and 14 multi-image generators.
“With these data generators, we can automatically synthesize questions and answers given an image’s scene graph. For example, given an image of a busy street, ProVision can generate questions such as, ‘What is the relationship between the pedestrian and the car?’ or ‘Which object is closer to the red building, [the] car or pedestrian?’” lead researchers Jieyu Zhang and Le Xue noted in a blog post.
The first approach, augmenting Visual Genome’s scene graphs with depth and segmentation annotations from Depth Anything V2 and SAM-2, yielded 1.5 million single-image instruction data points and 4.2 million multi-image instruction data points. The second, building scene graphs from scratch from 120,000 high-resolution images in the DataComp dataset using models such as Yolo-World, Coca, Llava-1.5 and Osprey, generated 2.3 million single-image instruction data points and 4.2 million multi-image instruction data points.
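Conceptually, the augmentation step in the first approach attaches extra per-object annotations, such as depth estimates, to existing scene-graph nodes so that new question types (like “which object is closer?”) become answerable. The sketch below, reusing the hypothetical structures from earlier, illustrates the idea without calling the actual models:

```python
# Hypothetical augmentation: attach a per-object depth estimate to each
# node. In ProVision, depth comes from a model such as Depth Anything V2;
# here it is taken as a precomputed mapping from node id to distance.
def augment_with_depth(graph: SceneGraph, depth_by_object: dict[str, float]) -> None:
    for obj_id, node in graph.objects.items():
        node.attributes.append(f"depth={depth_by_object[obj_id]:.1f}m")

augment_with_depth(graph, {"o1": 4.1, "o2": 6.8})  # illustrative values
```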
In all, the four splits combined make up ProVision-10M, a dataset with more than 10 million unique instruction data points. It is now available on Hugging Face and already proving to be very effective in AI training pipelines.
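For practitioners who want to explore it, the dataset can be pulled with the Hugging Face datasets library. The repository id and split name below are assumptions to verify on the Hub:

```python
from datasets import load_dataset

# Repository id and split name are assumed; check the Hugging Face Hub
# listing for the exact names and available configurations.
ds = load_dataset("Salesforce/ProVision-10M", split="train", streaming=True)

# Stream a single instruction data point without downloading everything.
print(next(iter(ds)))
```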
Specifically, when the company incorporated ProVision-10M into multimodal AI fine-tuning recipes (LLaVA-1.5 for single-image instruction data and Mantis-SigLIP-8B for multi-image instruction data), it saw notable improvements: the models’ average performance was higher than when they were fine-tuned without ProVision data.
“When adopted in the instruction tuning stage, our single-image instruction data yields up to a 7% improvement on the 2D split and 8% on the 3D split of CVBench, along with a 3% increase in performance on QBench2, RealWorldQA, and MMMU. Our multi-image instruction data leads to an 8% improvement on Mantis-Eval,” the researchers noted in the paper.
Synthetic data is here to stay
While there are several tools and platforms, including the new Cosmos world foundation models from Nvidia, for generating different modalities of data (from images to videos) that can be used for multimodal AI training, only a handful have looked at the problem of creating the instruction datasets that pair with that data.
Salesforce is addressing that bottleneck with ProVision, giving enterprises a way to go beyond manual labeling or black-box language models. Generating instruction data programmatically keeps the generation process interpretable and controllable, and it scales efficiently while maintaining factual accuracy.
In the long run, the company hopes researchers can build on this work to enhance the scene graph generation pipelines and create more data generators covering new types of instruction data, such as those for videos.