4+3 Phases of Compute-Optimal Neural Scaling Laws
by Elliot Paquette and 3 other authors
Abstract: We consider a solvable neural scaling model with three parameters: data complexity, target complexity, and model parameter count. We use this neural scaling model to derive new predictions about the compute-limited, infinite-data scaling-law regime. To train the neural scaling model, we run one-pass stochastic gradient descent on a mean-squared loss. We derive a representation of the loss curves that holds over all iteration counts and improves in accuracy as the model parameter count grows. We then analyze the compute-optimal model parameter count and identify 4 phases (+3 subphases) in the data-complexity/target-complexity phase plane. The phase boundaries are determined by the relative importance of model capacity, optimizer noise, and embedding of the features. We furthermore derive, with mathematical proof and extensive numerical evidence, the scaling-law exponents in all of these phases, in particular computing the optimal model parameter count as a function of the floating-point-operation budget.
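The training procedure described in the abstract (one-pass stochastic gradient descent on a mean-squared loss for a three-parameter model) can be illustrated with a small simulation. The sketch below assumes a power-law random-features setup, a common instantiation of solvable scaling models of this kind; the exponents `alpha` and `beta`, the dimensions `v` and `d`, and the learning rate are illustrative placeholders, not values taken from the paper.

```python
import numpy as np

# Hedged sketch: one-pass SGD on a mean-squared loss for an assumed
# power-law random-features model. alpha/beta play the role of the
# data/target complexity parameters and d is the model parameter count;
# all numeric values here are illustrative, not the paper's settings.

rng = np.random.default_rng(0)

v, d = 2000, 100                 # ambient feature count, model parameter count
alpha, beta = 1.0, 0.5           # data-complexity / target-complexity exponents
j = np.arange(1, v + 1, dtype=float)
eigs = j ** (-alpha)             # power-law data covariance spectrum
b = j ** (-beta)                 # power-law target coefficients

W = rng.standard_normal((v, d)) / np.sqrt(d)   # fixed random embedding
theta = np.zeros(d)                            # trainable parameters
lr = 0.5 / d                                   # small constant step size

def population_loss(theta):
    # Exact mean-squared loss under the Gaussian data model:
    # E[(x @ W @ theta - x @ b)^2] / 2 with Cov(x) = diag(eigs).
    resid = W @ theta - b
    return 0.5 * resid @ (eigs * resid)

for t in range(50_000):
    x = np.sqrt(eigs) * rng.standard_normal(v)  # fresh sample each step: one-pass
    feat = W.T @ x                              # embedded features
    err = feat @ theta - b @ x                  # prediction error on this sample
    theta -= lr * err * feat                    # SGD step on the squared loss
    if t % 10_000 == 0:
        print(f"iter {t:6d}  loss {population_loss(theta):.4e}")
```

In this setting, each SGD step costs a number of floating-point operations proportional to d, so a compute budget C corresponds roughly to d times the iteration count. Sweeping d at fixed C and taking the loss-minimizing choice traces out a compute-optimal parameter count d*(C); the paper's contribution is deriving the power-law exponent of such curves in each phase of the data-complexity/target-complexity plane.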
Submission history
From: Courtney Paquette
[v1] Thu, 23 May 2024 21:50:54 UTC (26,499 KB)
[v2] Sun, 17 Nov 2024 01:57:32 UTC (28,353 KB)