Reasoning Abilities of Large Language Models: In-Depth Analysis on the Abstraction and Reasoning Corpus

Authors: Seungpil Lee, Woochang Sim, Donghyeon Shin, Wongyu Seo, Jiwon Park, Seokki Lee, Sanha Hwang, Sejin Kim, Sundong Kim


Abstract: Existing methods for evaluating the inference abilities of Large Language Models (LLMs) have been results-centric, making it difficult to assess the inference process itself. We introduce a new approach that uses the Abstraction and Reasoning Corpus (ARC) dataset to evaluate the inference and contextual understanding abilities of LLMs in a process-centric manner. ARC demands rigorous logical structures for problem-solving, making it a benchmark well suited to comparing model inference abilities with those of humans. Experimental results confirm that while LLMs possess weak inference abilities, they still lag behind humans in logical coherence, compositionality, and productivity. Our experiments highlight the reasoning capabilities of LLMs and propose development paths toward human-level reasoning.

Submission history

From: Sundong Kim
[v1] Mon, 18 Mar 2024 13:50:50 UTC (8,423 KB)
[v2] Thu, 12 Sep 2024 23:08:08 UTC (2,674 KB)
