MM-Vet v2: A Challenging Benchmark to Evaluate Large Multimodal Models for Integrated Capabilities, by Weihao Yu and 9 other authors
Abstract: MM-Vet, with open-ended vision-language questions aimed at evaluating integrated capabilities, has become one of the most popular benchmarks for large multimodal model evaluation. MM-Vet assesses six core vision-language (VL) capabilities: recognition, knowledge, spatial awareness, language generation, OCR, and math. However, its question format is restricted to single image-text pairs, lacking the interleaved image and text sequences prevalent in real-world scenarios. To address this limitation, we introduce MM-Vet v2, which includes a new VL capability called “image-text sequence understanding”, evaluating models’ ability to process VL sequences. Furthermore, we maintain the high quality of evaluation samples while further expanding the evaluation set size. Using MM-Vet v2 to benchmark large multimodal models, we found that Claude 3.5 Sonnet is the best model with a score of 71.8, slightly outperforming GPT-4o, which scored 71.0. Among open-weight models, InternVL2-Llama3-76B leads with a score of 68.4. The code, data, and leaderboard are accessible at this https URL.
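To illustrate the interleaved image-text input format that the new "image-text sequence understanding" capability targets, here is a minimal sketch of sending such a question to a model like GPT-4o. This is not the official MM-Vet v2 harness, only an assumption-laden example: it uses the OpenAI Python SDK, and the question text and image URLs are hypothetical placeholders.

```python
# Minimal sketch (not the MM-Vet v2 evaluation code) of an interleaved
# image-text query, the question format MM-Vet v2 adds over single
# image-text pairs. Image URLs and question text are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        # Text and images interleaved in a single user turn.
        "content": [
            {"type": "text", "text": "Compare the two receipts below."},
            {"type": "image_url", "image_url": {"url": "https://example.com/receipt1.png"}},
            {"type": "text", "text": "versus"},
            {"type": "image_url", "image_url": {"url": "https://example.com/receipt2.png"}},
            {"type": "text", "text": "Which total is larger, and by how much?"},
        ],
    }],
)
print(response.choices[0].message.content)  # open-ended answer to be scored
```

The model's free-form answer would then be graded against the reference answer, in keeping with MM-Vet's open-ended question design.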
Submission history
From: Weihao Yu
[v1] Thu, 1 Aug 2024 17:59:54 UTC (1,490 KB)
[v2] Sun, 1 Dec 2024 06:08:00 UTC (1,490 KB)