Assessing and Enhancing the Robustness of Large Language Models with Task Structure Variations for Logical Reasoning


Authors: Qiming Bao and 7 other authors


Abstract: Large language models (LLMs) such as LLaMA, Alpaca, Vicuna, GPT-3.5 and GPT-4 have advanced the performance of AI systems on various natural language processing tasks to human-like levels. However, their generalisation and robustness when performing logical reasoning have not been sufficiently assessed. To evaluate this ability comprehensively, we develop three new logical reasoning datasets, “ReClor-plus”, “LogiQA-plus” and “LogiQAv2-plus”, which extend standard logical reasoning benchmarks to probe the robustness of LLMs’ reasoning. For each dataset we create three subsets: the first with randomly shuffled options, the second with the correct choice replaced by “none of the other options is correct”, and the third combining shuffling and substitution. Experiments show that these simple perturbations greatly hinder performance: despite high scores on the original publicly available datasets, all models perform poorly on the newly constructed ones. We also demonstrate that introducing such task variations into the training set markedly improves performance on both the original and the new datasets. Finally, we show that applying logic-driven data augmentation for fine-tuning and prompting can enhance generalisation in both discriminative and generative models, offering a path to improving their robustness on tasks involving logical reasoning. Source code and data are made publicly available at this https URL.
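To make the three perturbations concrete, here is a minimal sketch in Python of how such variant subsets could be generated from a multiple-choice item. The function names and the toy example are hypothetical illustrations of the scheme described in the abstract, not the authors’ released code (which is linked above).

```python
import random

# Replacement text for the gold option, as described in the abstract.
NONE_OPTION = "none of the other options is correct"

def shuffle_options(options, answer_idx, rng=random):
    """Variant 1: randomly permute the options and track where the
    correct choice ends up."""
    order = list(range(len(options)))
    rng.shuffle(order)
    shuffled = [options[i] for i in order]
    return shuffled, order.index(answer_idx)

def replace_answer(options, answer_idx):
    """Variant 2: substitute the correct choice with the
    'none of the other options is correct' option, which then
    becomes the gold answer."""
    replaced = list(options)
    replaced[answer_idx] = NONE_OPTION
    return replaced, answer_idx

def shuffle_and_replace(options, answer_idx, rng=random):
    """Variant 3: apply both perturbations - substitute, then shuffle."""
    replaced, idx = replace_answer(options, answer_idx)
    return shuffle_options(replaced, idx, rng)

# Toy ReClor-style item (hypothetical), with option index 1 correct.
options = ["A distractor", "The correct conclusion",
           "Another distractor", "A third distractor"]
print(shuffle_and_replace(options, answer_idx=1))
```

A model that genuinely reasons over the question should be insensitive to option order and able to select the “none of the other options” choice when the original answer is absent; the abstract reports that models tuned on the unperturbed datasets are not.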

Submission history

From: Qiming Bao [view email]
[v1]
Fri, 13 Oct 2023 22:29:15 UTC (7,827 KB)
[v2]
Tue, 17 Oct 2023 02:08:24 UTC (7,827 KB)
[v3]
Wed, 18 Oct 2023 22:46:12 UTC (7,827 KB)
[v4]
Sat, 30 Mar 2024 09:49:19 UTC (89 KB)
[v5]
Fri, 17 Jan 2025 04:39:38 UTC (70 KB)


