Piecing It All Together: Verifying Multi-Hop Multimodal Claims

View a PDF of the paper titled Piecing It All Together: Verifying Multi-Hop Multimodal Claims, by Haoran Wang and 6 other authors


Abstract: Existing claim verification datasets often do not require systems to perform complex reasoning or effectively interpret multimodal evidence. To address this, we introduce a new task: multi-hop multimodal claim verification. This task challenges models to reason over multiple pieces of evidence from diverse sources, including text, images, and tables, and determine whether the combined multimodal evidence supports or refutes a given claim. To study this task, we construct MMCV, a large-scale dataset comprising 15k multi-hop claims paired with multimodal evidence, generated and refined using large language models, with additional input from human feedback. We show that MMCV is challenging even for the latest state-of-the-art multimodal large language models, especially as the number of reasoning hops increases. Additionally, we establish a human performance benchmark on a subset of MMCV. We hope this dataset and its evaluation task will encourage future research in multimodal multi-hop claim verification.

Submission history

From: Haoran Wang
[v1]
Thu, 14 Nov 2024 16:01:33 UTC (22,458 KB)
[v2]
Thu, 12 Dec 2024 19:23:28 UTC (22,437 KB)


