[Submitted on 25 Nov 2024]
Probing the limitations of multimodal language models for chemistry and materials research, by Nawaf Alampara and 7 other authors
Abstract: Recent advancements in artificial intelligence have sparked interest in scientific assistants that could support researchers across the full spectrum of scientific workflows, from literature review to experimental design and data analysis. A key capability for such systems is the ability to process and reason about scientific information in both visual and textual forms – from interpreting spectroscopic data to understanding laboratory setups. Here, we introduce MaCBench, a comprehensive benchmark for evaluating how vision-language models handle real-world chemistry and materials science tasks across three core aspects: data extraction, experimental understanding, and results interpretation. Through a systematic evaluation of leading models, we find that while these systems show promising capabilities in basic perception tasks – achieving near-perfect performance in equipment identification and standardized data extraction – they exhibit fundamental limitations in spatial reasoning, cross-modal information synthesis, and multi-step logical inference. Our insights have important implications beyond chemistry and materials science, suggesting that developing reliable multimodal AI scientific assistants may require advances in curating suitable training data and in approaches to training those models.
Submission history
From: Kevin Maik Jablonka
[v1] Mon, 25 Nov 2024 21:51:45 UTC (20,804 KB)