Limits of Transformer Language Models on Learning to Compose Algorithms



Authors: Jonathan Thomm and 5 other authors

Abstract: We analyze the capabilities of Transformer language models in learning compositional discrete tasks. To this end, we evaluate training LLaMA models and prompting GPT-4 and Gemini on four tasks that require learning a composition of several discrete sub-tasks. In particular, we measure how well these models can reuse primitives observable in the sub-tasks to learn the composition task. Our results indicate that compositional learning in state-of-the-art Transformer language models is highly sample inefficient: LLaMA requires more data samples to learn the compositional task than it would need to relearn all sub-tasks from scratch, and in-context prompting with few samples is unreliable, failing to execute the sub-tasks or to correct errors in multi-round code generation. Further, by leveraging complexity theory, we support these findings with a theoretical analysis focused on the sample inefficiency of gradient descent in memorizing feedforward models. We open source our code at this https URL.
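The paper's actual benchmarks are defined in the linked code; the sketch below is only a hypothetical illustration of what a "compositional discrete task" built from observable sub-tasks could look like. The task names (sub_task_sort, sub_task_reverse) and the serialization format are assumptions for illustration, not the paper's tasks.

```python
# Hypothetical illustration (not the paper's benchmark): a compositional
# discrete task assembled from two sub-tasks. The question studied in the
# paper is whether a model that has seen the sub-tasks can learn their
# composition from fewer samples than relearning everything from scratch;
# this sketch only shows how such sub-task / composition pairs might be
# generated and serialized for a language model.
import random

def sub_task_sort(seq):
    # Sub-task 1: sort a digit sequence.
    return sorted(seq)

def sub_task_reverse(seq):
    # Sub-task 2: reverse a digit sequence.
    return list(reversed(seq))

def composed_task(seq):
    # Composition: apply sub-task 1, then sub-task 2 (sort, then reverse).
    return sub_task_reverse(sub_task_sort(seq))

def make_samples(task_fn, n, length=6, seed=0):
    # Produce (input, output) string pairs as they might be fed to an LM.
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        seq = [rng.randint(0, 9) for _ in range(length)]
        inp = " ".join(map(str, seq))
        out = " ".join(map(str, task_fn(seq)))
        samples.append((inp, out))
    return samples

if __name__ == "__main__":
    # Sub-task data is easy to generate in bulk; the experimental question is
    # how many *composition* samples the model needs on top of it.
    for name, fn in [("sort", sub_task_sort),
                     ("reverse", sub_task_reverse),
                     ("sort-then-reverse", composed_task)]:
        inp, out = make_samples(fn, 1)[0]
        print(f"{name:>18}: {inp}  ->  {out}")
```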

Submission history

From: Jonathan Thomm
[v1]
Thu, 8 Feb 2024 16:23:29 UTC (184 KB)
[v2]
Tue, 13 Feb 2024 07:36:40 UTC (184 KB)
[v3]
Sat, 25 May 2024 11:09:28 UTC (1,143 KB)
[v4]
Wed, 9 Oct 2024 03:43:34 UTC (1,143 KB)
[v5]
Tue, 5 Nov 2024 06:32:38 UTC (973 KB)


