Visual Error Patterns in Multi-Modal AI: A Statistical Approach



Ching-Yi Wang

Abstract: Multi-modal large language models (MLLMs), such as GPT-4o, excel at integrating text and visual data but face systematic challenges when interpreting ambiguous or incomplete visual stimuli. This study leverages statistical modeling to analyze the factors driving these errors, using a dataset of geometric stimuli characterized by features such as 3D structure, rotation, and missing faces or sides. We applied parametric methods, non-parametric methods, and ensemble techniques to predict classification errors, with the non-linear gradient boosting model achieving the highest performance (AUC = 0.85) under cross-validation. Feature importance analysis highlighted difficulties in depth perception and in reconstructing incomplete structures as key contributors to misclassification. These findings demonstrate the effectiveness of statistical approaches for uncovering limitations in MLLMs and offer actionable insights for enhancing model architectures by integrating contextual reasoning mechanisms.
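
To make the error-modeling setup concrete, here is a minimal sketch of the kind of analysis the abstract describes: a gradient boosting classifier fit on binary stimulus features to predict misclassification, evaluated by cross-validated AUC, with feature importances indicating which properties drive errors. The feature names, the synthetic data, and the scikit-learn setup are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (assumed setup, not the paper's code): predict MLLM
# classification errors from stimulus features with gradient boosting.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200

# Hypothetical binary stimulus features (placeholders for the paper's
# 3D / rotation / missing-face attributes).
X = pd.DataFrame({
    "is_3d": rng.integers(0, 2, n),
    "rotated": rng.integers(0, 2, n),
    "missing_face": rng.integers(0, 2, n),
})
# Hypothetical labels: 1 = the MLLM misclassified the stimulus.
y = rng.integers(0, 2, n)

clf = GradientBoostingClassifier(random_state=0)

# Cross-validated AUC, analogous to the AUC = 0.85 reported in the abstract
# (the synthetic data here will score near chance).
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
print(f"mean cross-validated AUC: {auc:.2f}")

# Feature importances indicate which stimulus properties drive errors.
clf.fit(X, y)
for name, imp in zip(X.columns, clf.feature_importances_):
    print(f"{name}: {imp:.3f}")
```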

Submission history

From: Ching-Yi Wang
[v1] Wed, 27 Nov 2024 01:20:08 UTC (1,802 KB)
[v2] Wed, 4 Dec 2024 23:27:34 UTC (1,803 KB)
[v3] Fri, 6 Dec 2024 02:01:54 UTC (1,803 KB)
