Rule Extrapolation in Language Models: A Study of Compositional Generalization on OOD Prompts

View a PDF of the paper titled Rule Extrapolation in Language Models: A Study of Compositional Generalization on OOD Prompts, by Anna Mészáros and 4 other authors


Abstract: LLMs show remarkable emergent abilities, such as inferring concepts from presumably out-of-distribution (OOD) prompts, known as in-context learning. Though this success is often attributed to the Transformer architecture, our systematic understanding of it is limited. In complex real-world data sets, even defining what is out-of-distribution is not obvious. To better understand the OOD behaviour of autoregressive LLMs, we focus on formal languages, which are defined by the intersection of rules. We define a new scenario of OOD compositional generalization, termed rule extrapolation, which describes OOD scenarios where the prompt violates at least one rule. We evaluate rule extrapolation in formal languages of varying complexity across linear and recurrent architectures, the Transformer, and state space models, to understand each architecture's influence on rule extrapolation. We also lay the first stones of a normative theory of rule extrapolation, inspired by the Solomonoff prior in algorithmic information theory.
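To make the setup concrete, here is an illustrative sketch (my own example, not taken from the paper) of a formal language defined as the intersection of rules, and of a rule-extrapolation prompt that violates one rule while remaining consistent with the other. The language a^n b^n used below is a standard example; whether the paper uses exactly this language is an assumption.

```python
# Hypothetical example: the language L = {a^n b^n} as an intersection of two rules.
def rule1(s: str) -> bool:
    """Rule 1: all a's precede all b's (no 'ba' substring)."""
    return "ba" not in s

def rule2(s: str) -> bool:
    """Rule 2: the number of a's equals the number of b's."""
    return s.count("a") == s.count("b")

def in_language(s: str) -> bool:
    """A string is in L only if it satisfies BOTH rules."""
    return rule1(s) and rule2(s)

# In-distribution string: satisfies both rules.
print(in_language("aabb"))   # True

# Rule-extrapolation prompt: "ba" violates Rule 1 (a 'b' precedes an 'a'),
# so no completion can re-enter L. A model that "extrapolates the rule"
# would still complete it so that Rule 2 holds, e.g. "ba" -> "baab":
print(rule1("baab"))         # False: Rule 1 stays violated
print(rule2("baab"))         # True: Rule 2 is still satisfied
```

The point of the scenario is precisely this asymmetry: the prompt makes one rule unsatisfiable, and the question is whether the trained model's completion still respects the remaining rule.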

Submission history

From: Anna Mészáros [view email]
[v1] Mon, 9 Sep 2024 22:36:35 UTC (641 KB)
[v2] Thu, 24 Oct 2024 11:30:33 UTC (641 KB)


