MULTI: Multimodal Understanding Leaderboard with Text and Images, by Zichen Zhu and 13 other authors
Abstract: The rapid development of multimodal large language models (MLLMs) raises the question of how they compare to human performance. While existing datasets often feature synthetic or overly simplistic tasks, some models have already surpassed human expert baselines. In this paper, we present MULTI, a Chinese multimodal dataset derived from authentic examination questions. Comprising over 18,000 carefully selected and refined questions, MULTI evaluates models against real-world examination standards, encompassing image-text comprehension, complex reasoning, and knowledge recall. We also introduce MULTI-Elite, a hard subset of 500 selected questions, and MULTI-Extend, with more than 4,500 external knowledge context pieces for testing in-context learning capabilities. Our evaluation highlights substantial room for MLLM advancement: Qwen2-VL-72B leads the 25 evaluated models with 76.9% accuracy on MULTI and 53.1% on MULTI-Elite, compared to human expert baselines of 86.1% and 73.1%. MULTI serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.
Submission history
From: Zichen Zhu
[v1]
Mon, 5 Feb 2024 16:41:02 UTC (10,211 KB)
[v2]
Tue, 20 Feb 2024 07:55:52 UTC (10,670 KB)
[v3]
Tue, 7 Jan 2025 07:05:05 UTC (5,235 KB)