OccamLLM: Fast and Exact Language Model Arithmetic in a Single Step

Authors: Owen Dugan and 5 other authors

Abstract: Despite significant advancements in text generation and reasoning, Large Language Models (LLMs) still face challenges in accurately performing complex arithmetic operations. To achieve accurate calculations, language model systems often enable LLMs to generate code for arithmetic operations. However, this approach compromises speed and security, and fine-tuning risks the language model losing prior capabilities. We propose a framework that enables exact arithmetic in a single autoregressive step, providing faster, more secure, and more interpretable LLM systems with arithmetic capabilities. We use the hidden states of an LLM to control a symbolic architecture that performs arithmetic. Our implementation using Llama 3 with OccamNet as a symbolic model (OccamLlama) achieves 100% accuracy on single arithmetic operations ($+, -, \times, \div, \sin, \cos, \log, \exp, \sqrt{\,}$), outperforming GPT 4o both with and without a code interpreter. Furthermore, OccamLlama outperforms GPT 4o with and without a code interpreter on average across a range of mathematical problem-solving benchmarks, demonstrating that OccamLLMs can excel in arithmetic tasks, even surpassing much larger models. We will make our code public shortly.
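The abstract describes using an LLM's hidden states to drive a symbolic module that computes arithmetic exactly in a single decoding step. The following is a minimal sketch of that idea only, not the paper's actual OccamNet implementation: the names (ArithmeticController, exact_arithmetic_step), shapes, and operation set are illustrative assumptions. The key point it captures is that the base LLM stays frozen and only a small head routes the hidden state to an exact symbolic computation.

```python
# Minimal sketch (not the authors' implementation): use an LLM hidden state to
# select a symbolic arithmetic operation and evaluate it exactly in one step,
# rather than generating the result token by token. All names and shapes here
# are illustrative assumptions.

import math
import torch
import torch.nn as nn

# Hypothetical symbolic primitives the controller can route between
# (unary ops simply ignore the second operand).
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
    "div": lambda a, b: a / b,
    "sin": lambda a, b: math.sin(a),
    "cos": lambda a, b: math.cos(a),
    "log": lambda a, b: math.log(a),
    "exp": lambda a, b: math.exp(a),
    "sqrt": lambda a, b: math.sqrt(a),
}


class ArithmeticController(nn.Module):
    """Maps an LLM hidden state to a choice of symbolic operation.

    The frozen LLM is untouched; only this small head would be trained,
    which is how the approach avoids degrading the base model's other
    capabilities.
    """

    def __init__(self, hidden_size: int):
        super().__init__()
        self.op_head = nn.Linear(hidden_size, len(OPS))

    def forward(self, hidden_state: torch.Tensor) -> int:
        # Pick the most likely operation for this decoding step.
        return int(self.op_head(hidden_state).argmax(dim=-1))


def exact_arithmetic_step(hidden_state, operands, controller):
    """One autoregressive step: choose an op from the hidden state and
    evaluate it exactly with ordinary floating-point math."""
    op_name = list(OPS)[controller(hidden_state)]
    a, b = operands
    return op_name, OPS[op_name](a, b)


if __name__ == "__main__":
    torch.manual_seed(0)
    controller = ArithmeticController(hidden_size=4096)  # e.g. Llama 3 8B width
    h = torch.randn(4096)  # stand-in for a real LLM hidden state
    print(exact_arithmetic_step(h, (355.0, 113.0), controller))
```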

Submission history

From: Owen Dugan
[v1] Tue, 4 Jun 2024 04:17:40 UTC (3,501 KB)
[v2] Tue, 18 Jun 2024 17:51:42 UTC (3,341 KB)
[v3] Sat, 29 Jun 2024 19:13:23 UTC (3,349 KB)
[v4] Tue, 3 Sep 2024 02:11:01 UTC (3,376 KB)
