The Benefits of a Concise Chain of Thought on Problem-Solving in Large Language Models

Authors: Matthew Renze, Erhan Guven

Abstract: In this paper, we introduce Concise Chain-of-Thought (CCoT) prompting. We compared standard CoT and CCoT prompts to see how conciseness impacts response length and correct-answer accuracy. We evaluated this using GPT-3.5 and GPT-4 with a multiple-choice question-and-answer (MCQA) benchmark. CCoT reduced average response length by 48.70% for both GPT-3.5 and GPT-4 while having a negligible impact on problem-solving performance. However, on math problems, GPT-3.5 with CCoT incurs a performance penalty of 27.69%. Overall, CCoT leads to an average per-token cost reduction of 22.67%.
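
To make the comparison concrete, here is a minimal sketch of how a standard CoT prompt and a concise CoT (CCoT) prompt could be run side by side on an MCQA-style question using the OpenAI Python SDK. The system instructions, the sample question, and helper names such as `ask`, `STANDARD_COT`, and `CCOT` are illustrative assumptions for this sketch, not the paper's verbatim prompts or code.

```python
# Minimal sketch: comparing a standard CoT prompt with a concise CoT (CCoT) prompt.
# Assumes the OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY set in the environment.
# The system instructions below are illustrative, not the paper's exact prompts.
from openai import OpenAI

client = OpenAI()

STANDARD_COT = "Think step by step to answer the question, then state the final answer."
CCOT = "Think step by step, but be concise. Keep your reasoning brief, then state the final answer."

QUESTION = (
    "A train travels 60 miles in 1.5 hours. At the same speed, "
    "how far does it travel in 4 hours? (A) 120 (B) 160 (C) 180 (D) 240"
)

def ask(system_prompt: str, question: str, model: str = "gpt-4") -> tuple[str, int]:
    """Send one MCQA question and return (response text, completion tokens used)."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content, response.usage.completion_tokens

for name, prompt in [("Standard CoT", STANDARD_COT), ("Concise CoT", CCOT)]:
    answer, tokens = ask(prompt, QUESTION)
    print(f"{name}: {tokens} completion tokens\n{answer}\n")
```

Recording the completion-token count alongside the chosen answer for each prompt style mirrors the paper's two measures: response length and correct-answer accuracy.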

Submission history

From: Matthew Renze
[v1] Thu, 11 Jan 2024 01:52:25 UTC (65 KB)
[v2] Mon, 9 Sep 2024 23:54:35 UTC (58 KB)


