Benchmark Evaluations, Applications, and Challenges of Large Vision Language Models: A Survey
Zongxia Li et al.
Abstract: Multimodal Vision Language Models (VLMs) have emerged as a transformative technology at the intersection of computer vision and natural language processing, enabling machines to perceive and reason about the world through both visual and textual modalities. For example, models such as CLIP, Claude, and GPT-4V demonstrate strong reasoning and understanding abilities on visual and textual data, and outperform classical single-modality vision models on zero-shot classification. Despite their rapid advancements in research and growing popularity in applications, a comprehensive survey of existing studies on VLMs is notably lacking, particularly for researchers aiming to leverage VLMs in their specific domains. To this end, we provide a systematic overview of VLMs in the following aspects: model information of the major VLMs developed over the past five years (2019-2024); the main architectures and training methods of these VLMs; a summary and categorization of the popular benchmarks and evaluation metrics of VLMs; the applications of VLMs, including embodied agents, robotics, and video generation; and the challenges and issues faced by current VLMs, such as hallucination, fairness, and safety. Detailed collections including papers and model repository links are listed in this https URL.
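The zero-shot classification the abstract credits to models like CLIP works by scoring an image against natural-language label prompts in a shared embedding space, with no task-specific training. Below is a minimal sketch of that idea using CLIP through the Hugging Face transformers library; the checkpoint name is the public "openai/clip-vit-base-patch32" release, while the image path and candidate labels are placeholder assumptions for illustration, not from the paper.

```python
# Minimal sketch: CLIP zero-shot image classification with Hugging Face
# transformers. The image path and label set below are placeholders.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder image path
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

# Encode the image and all candidate text prompts jointly.
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into a probability over the candidate labels. No classifier head is
# trained for this label set, which is what makes it "zero-shot".
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```

Because the labels are ordinary text, swapping in a new label set requires no retraining, which is the property that lets CLIP compete with single-modality classifiers on unseen categories.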
Submission history
From: Hongyang Du
[v1] Sat, 4 Jan 2025 04:59:33 UTC (1,722 KB)
[v2] Fri, 10 Jan 2025 17:43:10 UTC (1,661 KB)