SciAssess: Benchmarking LLM Proficiency in Scientific Literature Analysis



By Hengxing Cai and 22 other authors

Abstract: Recent breakthroughs in Large Language Models (LLMs) have revolutionized scientific literature analysis. However, existing benchmarks fail to adequately evaluate LLM proficiency in this domain, particularly in scenarios that require higher-level abilities beyond memorization and the handling of multimodal data. To address this gap, we introduce SciAssess, a benchmark designed for the comprehensive evaluation of LLMs in scientific literature analysis. It assesses LLMs across three levels of capability: Memorization (L1), Comprehension (L2), and Analysis & Reasoning (L3), and it encompasses a variety of tasks drawn from diverse scientific fields, including biology, chemistry, materials science, and medicine. To ensure the reliability of SciAssess, rigorous quality-control measures have been implemented, covering accuracy, anonymization, and compliance with copyright standards. SciAssess evaluates 11 LLMs, highlighting their strengths and areas for improvement. We hope this evaluation supports the ongoing development of LLM applications in scientific literature analysis. SciAssess and its resources are available at this https URL.

Submission history

From: Hengxing Cai
[v1] Mon, 4 Mar 2024 12:19:28 UTC (4,202 KB)
[v2] Fri, 15 Mar 2024 13:27:31 UTC (8,174 KB)
[v3] Sat, 15 Jun 2024 15:45:47 UTC (7,855 KB)
[v4] Tue, 18 Jun 2024 05:45:33 UTC (7,855 KB)
[v5] Fri, 18 Oct 2024 06:52:17 UTC (5,754 KB)


