Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond

Authors: Fangzhi Xu and 5 other authors

Abstract: Logical reasoning plays a fundamental role in knowledge engineering and artificial intelligence. Recently, Large Language Models (LLMs) have emerged as a noteworthy innovation in natural language processing (NLP). However, whether LLMs can effectively address logical reasoning tasks, which require gradual cognitive inference similar to human intelligence, remains an open question. This paper aims to bridge that gap with comprehensive evaluations. First, to make the evaluations systematic, we select fifteen typical logical reasoning datasets and organize them into deductive, inductive, abductive, and mixed-form reasoning settings. For breadth, we include three early-era representative LLMs and four trending LLMs. Second, unlike previous evaluations that rely only on simple metrics (e.g., accuracy), we propose fine-level evaluations in both objective and subjective manners, covering answers as well as explanations: answer correctness, explain correctness, explain completeness, and explain redundancy. Additionally, to uncover the logical flaws of LLMs, we attribute problematic cases to five error types along two dimensions, i.e., the evidence selection process and the reasoning process. Third, to avoid the influence of knowledge bias and to benchmark the logical reasoning capability of LLMs in isolation, we propose a new dataset with neutral content. Based on these in-depth evaluations, the paper finally forms a general evaluation scheme of logical reasoning capability along six dimensions (Correct, Rigorous, Self-aware, Active, Oriented, and No hallucination). It reflects the pros and cons of LLMs and gives guiding directions for future work.
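
The abstract's fine-level metrics suggest a natural scoring shape: per-example judgements on the answer and on three explanation qualities, aggregated into dataset-level rates. The sketch below is an illustrative assumption, not the paper's implementation; the `Judgement` fields and the simple rate aggregation are ours, and the paper defines its own objective and subjective judging protocols.

```python
# A minimal sketch (not the authors' code) of fine-level evaluation as the
# abstract describes it: answer correctness plus three explanation qualities.
from dataclasses import dataclass

@dataclass
class Judgement:
    answer_correct: bool    # objective: predicted answer matches the gold label
    explain_correct: bool   # subjective: the reasoning steps are valid
    explain_complete: bool  # subjective: no required reasoning step is missing
    explain_redundant: bool # subjective: the explanation contains irrelevant steps

def score(judgements: list[Judgement]) -> dict[str, float]:
    """Aggregate per-example judgements into dataset-level rates."""
    n = len(judgements)
    return {
        "answer_correctness": sum(j.answer_correct for j in judgements) / n,
        "explain_correctness": sum(j.explain_correct for j in judgements) / n,
        "explain_completeness": sum(j.explain_complete for j in judgements) / n,
        "explain_redundancy": sum(j.explain_redundant for j in judgements) / n,
    }

if __name__ == "__main__":
    demo = [
        Judgement(True, True, True, False),
        Judgement(True, False, False, True),   # right answer, flawed reasoning
        Judgement(False, False, False, False),
    ]
    print(score(demo))
```

Separating answer-level from explanation-level judgements in this way makes visible the failure mode the paper targets: a model can score well on accuracy alone while its explanations are incorrect, incomplete, or padded with redundant steps.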

Submission history

From: Fangzhi Xu
[v1] Fri, 16 Jun 2023 13:39:35 UTC (10,997 KB)
[v2] Tue, 11 Jul 2023 13:41:20 UTC (11,917 KB)
[v3] Tue, 8 Aug 2023 12:57:18 UTC (11,812 KB)
[v4] Sun, 15 Sep 2024 07:49:32 UTC (12,112 KB)


