[Submitted on 9 Dec 2024]
AutoReason: Automatic Few-Shot Reasoning Decomposition, by Arda Sevinc and Abdurrahman Gumus
Abstract: Chain of Thought (CoT) was introduced in recent research as a method for improving step-by-step reasoning in Large Language Models. However, CoT has notable limitations: it relies on hand-crafted few-shot exemplar prompts and cannot adapt itself to different queries.
In this work, we propose a system to automatically generate rationales using CoT. Our method improves multi-step implicit reasoning capabilities by decomposing the implicit query into several explicit questions. This provides interpretability for the model and improves reasoning in weaker LLMs. We test our approach on two Q&A datasets, StrategyQA and HotpotQA, and show an increase in accuracy on both, especially on StrategyQA.
To facilitate further research in this field, the complete source code for this study has been made publicly available on GitHub: this https URL.
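The abstract describes a two-stage idea: an implicit query is first decomposed into explicit sub-questions (an automatically generated rationale), which then guide the model that produces the final answer. The following is a minimal Python sketch of that idea, not the paper's actual implementation; the call_llm helper, the prompt wording, and the split between a rationale-generating model and a weaker answering model are illustrative assumptions.

def call_llm(model: str, prompt: str) -> str:
    """Placeholder for an LLM API call; swap in a real client before use."""
    raise NotImplementedError

def autoreason_answer(question: str, strong_model: str, weak_model: str) -> str:
    # Stage 1 (assumed): a rationale-generating model decomposes the implicit
    # query into explicit sub-questions, forming the generated rationale.
    decomposition_prompt = (
        "Break the following question into a numbered list of simpler, "
        "explicit sub-questions that must be answered first:\n" + question
    )
    rationale = call_llm(strong_model, decomposition_prompt)

    # Stage 2 (assumed): the (possibly weaker) answering model receives the
    # original question together with the generated rationale as guidance.
    answer_prompt = (
        "Question: " + question + "\n"
        "Reasoning steps:\n" + rationale + "\n"
        "Using the steps above, state the final answer."
    )
    return call_llm(weak_model, answer_prompt)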
Submission history
From: Abdurrahman Gumus
[v1] Mon, 9 Dec 2024 20:35:39 UTC (6,311 KB)