Generalization on the Unseen, Logic Reasoning and Degree Curriculum
Emmanuel Abbe and 3 other authors
Abstract: This paper considers the learning of logical (Boolean) functions with a focus on the generalization on the unseen (GOTU) setting, a strong case of out-of-distribution generalization. This is motivated by the fact that the rich combinatorial nature of data in certain reasoning tasks (e.g., arithmetic/logic) makes representative data sampling challenging, and learning successfully under GOTU gives a first vignette of an ‘extrapolating’ or ‘reasoning’ learner. We study how different network architectures trained by (S)GD perform under GOTU and provide both theoretical and experimental evidence that, for sparse functions and a class of network models including instances of Transformers, random features models, and linear networks, a min-degree interpolator is learned on the unseen: an interpolator of the training data that has minimal Fourier mass on the higher-degree basis elements. These findings have two implications: (1) we provide an explanation for the length generalization problem for Boolean functions (e.g., Anil et al. 2022); (2) we introduce a curriculum learning algorithm called Degree-Curriculum that learns monomials more efficiently by incrementing supports. Finally, we discuss extensions to other models or non-sparse regimes where the min-degree bias may still occur or fade, as well as how it can potentially be corrected when undesirable.
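To make the min-degree interpolator concrete, recall the Boolean Fourier expansion: every f: {-1,+1}^n -> R writes as f(x) = sum_{S subset of [n]} fhat(S) * prod_{i in S} x_i. The sketch below is a minimal illustration, not the authors' code: the toy target f(x) = x1*x2*x3, the GOTU split that freezes the last coordinate to +1 during training, and the degree-weighted quadratic penalty (base 100) are all assumptions chosen for illustration; the penalty is a soft stand-in for exact min-degree selection, which it approaches as the base grows.

```python
import itertools
import numpy as np

n = 3

def target(x):
    return x[0] * x[1] * x[2]  # toy target f(x) = x1 * x2 * x3

# Enumerate the hypercube and the 2^n Fourier basis functions chi_S.
cube = list(itertools.product([-1, 1], repeat=n))
subsets = [S for r in range(n + 1) for S in itertools.combinations(range(n), r)]
chi = np.array([[np.prod([x[i] for i in S]) for S in subsets] for x in cube],
               dtype=float)

seen = [k for k, x in enumerate(cube) if x[-1] == 1]  # training half: x3 = +1
y = np.array([target(cube[k]) for k in seen], dtype=float)

# Among all interpolators g = sum_S c_S chi_S with g = f on the seen half,
# pick the one minimizing the degree-weighted mass sum_S base**|S| * c_S**2.
base = 100.0                       # steeper weights -> closer to min-degree
w = np.array([base ** len(S) for S in subsets])
A = chi[seen]                      # interpolation constraints: A @ c = y
Winv = np.diag(1.0 / w)
# Minimize c^T diag(w) c subject to A c = y (closed-form Lagrange solution).
c = Winv @ A.T @ np.linalg.solve(A @ Winv @ A.T, y)

for S, cS in zip(subsets, c):
    if abs(cS) > 1e-6:
        print("chi_%s: %+.4f" % (S, cS))
# Nearly all mass lands on the subset (0, 1), i.e. x1*x2: it matches f
# wherever x3 = +1 but flips sign on the entire unseen half x3 = -1,
# the failure mode the abstract attributes to the min-degree bias.
```

For the Degree-Curriculum, the abstract only says that monomials are learned by incrementing supports; the following is a hedged sketch of that idea, not a transcript of the paper's algorithm. The choice of Hamming balls of growing radius, the definition of support as the number of -1 coordinates, and the `train_steps` subroutine are all hypothetical.

```python
def degree_curriculum(samples, model, train_steps, n):
    """Train on inputs of growing support, a sketch of the curriculum idea."""
    def support(x):                     # assumed support: # of -1 coordinates
        return sum(1 for xi in x if xi == -1)
    for radius in range(n + 1):         # Hamming balls of increasing radius
        batch = [(x, y) for x, y in samples if support(x) <= radius]
        if batch:
            train_steps(model, batch)   # hypothetical (S)GD subroutine
    return model
```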
Submission history
From: Aryo Lotfi
[v1] Mon, 30 Jan 2023 17:44:05 UTC (767 KB)
[v2] Wed, 28 Jun 2023 15:41:49 UTC (1,012 KB)
[v3] Wed, 20 Nov 2024 17:16:01 UTC (494 KB)