[Submitted on 7 Jan 2025]
Fixed Points of Deep Neural Networks: Emergence, Stability, and Applications, by L. Berlyand and one other author
Abstract: We present numerical and analytical results on the formation and stability of a family of fixed points of deep neural networks (DNNs). Such fixed points appear in a class of DNNs when the dimensions of the input and output vectors are the same. We demonstrate applications of such networks in supervised, semi-supervised, and unsupervised learning, such as encoding/decoding of images and restoration of damaged images, among others.
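As a rough illustration of the setting described above, the sketch below builds a small fully-connected network whose input and output widths are equal and searches for a fixed point by plain iteration $x_{k+1} = f(x_k)$. All function names, the tanh ("sigmoid-type") activation, and the $1/\sqrt{N}$ scaling are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def make_dnn(N, L, rng):
    """Random fully-connected DNN with equal input/output width N and depth L."""
    Ws = [rng.standard_normal((N, N)) / np.sqrt(N) for _ in range(L)]
    bs = [rng.standard_normal(N) / np.sqrt(N) for _ in range(L)]
    def f(x):
        for W, b in zip(Ws, bs):
            x = np.tanh(W @ x + b)  # sigmoid-type activation
        return x
    return f

def find_fixed_point(f, x0, tol=1e-10, max_iter=10_000):
    """Plain fixed-point iteration x_{k+1} = f(x_k); returns x with f(x) ~ x."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

rng = np.random.default_rng(0)
N, L = 10, 5
f = make_dnn(N, L, rng)
x_star = find_fixed_point(f, rng.standard_normal(N))
print(np.linalg.norm(f(x_star) - x_star))  # ~0 if the iteration converged
```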
We present several numerical and analytical results. First, we show that for untrained DNNs with weights and biases initialized by normally distributed random variables, only one fixed point exists. This result holds for DNNs with any depth (number of layers) $L$, any layer width $N$, and sigmoid-type activation functions. Second, it has been shown that for a DNN whose parameters (weights and biases) are initialized by a "light-tailed" distribution (e.g., a normal distribution), the distribution of these parameters becomes "heavy-tailed" after training. This motivates our study of DNNs with "heavy-tailed" initialization. For such DNNs we show numerically that training leads to the emergence of $Q(N,L)$ fixed points, where $Q(N,L)$ is a positive integer depending on the number of layers $L$ and the layer width $N$. We further observe numerically that for fixed $N = N_0$ the function $Q(N_0, L)$ is non-monotone: it initially grows as $L$ increases and then decreases to 1.
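A minimal sketch of the counting idea is given below: it contrasts a Gaussian ("light-tailed") initialization with a Cauchy draw standing in for a "heavy-tailed" one, runs fixed-point iteration from many random starts, and clusters the converged limits to estimate the number of distinct fixed points. This is an illustrative assumption on my part, not the authors' experiment: it uses untrained random networks, a Cauchy distribution as the heavy-tailed example, and it can only detect attracting fixed points.

```python
import numpy as np

def make_dnn(N, L, rng, heavy_tailed=False):
    """Width-N, depth-L network with Gaussian or Cauchy (heavy-tailed) parameters."""
    sample = rng.standard_cauchy if heavy_tailed else rng.standard_normal
    Ws = [sample((N, N)) / np.sqrt(N) for _ in range(L)]
    bs = [sample(N) / np.sqrt(N) for _ in range(L)]
    def f(x):
        for W, b in zip(Ws, bs):
            x = np.tanh(W @ x + b)
        return x
    return f

def count_fixed_points(f, N, rng, n_starts=200, tol=1e-8, max_iter=5_000):
    """Crude estimate of Q: iterate from many starts, keep converged limits, cluster them."""
    limits = []
    for _ in range(n_starts):
        x = rng.standard_normal(N)
        converged = False
        for _ in range(max_iter):
            x_next = f(x)
            if np.linalg.norm(x_next - x) < tol:
                converged = True
                break
            x = x_next
        if converged and not any(np.linalg.norm(x - y) < 1e-4 for y in limits):
            limits.append(x)
    return len(limits)

rng = np.random.default_rng(1)
N, L = 10, 5
print(count_fixed_points(make_dnn(N, L, rng, heavy_tailed=False), N, rng))
print(count_fixed_points(make_dnn(N, L, rng, heavy_tailed=True), N, rng))
```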
This non-monotone behavior of $Q(N_0, L)$ is also obtained by an analytical derivation of the equation for the Empirical Spectral Distribution (ESD) of the input-output Jacobian, followed by a numerical solution of this equation.
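For context, the sketch below computes the input-output Jacobian of such a network by the chain rule and its empirical spectral distribution numerically, using singular values as the spectral quantity. This is an assumed illustration of the object being studied, not the analytical equation derived in the paper.

```python
import numpy as np

def jacobian_singular_values(Ws, bs, x):
    """Input-output Jacobian of the layered map at x, accumulated layer by layer,
    and its singular values (a numerical proxy for the ESD)."""
    J = np.eye(len(x))
    h = x
    for W, b in zip(Ws, bs):
        h = np.tanh(W @ h + b)
        D = np.diag(1.0 - h**2)   # derivative of tanh at the pre-activations
        J = D @ W @ J             # chain rule
    return np.linalg.svd(J, compute_uv=False)

rng = np.random.default_rng(2)
N, L = 50, 10
Ws = [rng.standard_normal((N, N)) / np.sqrt(N) for _ in range(L)]
bs = [rng.standard_normal(N) / np.sqrt(N) for _ in range(L)]
sv = jacobian_singular_values(Ws, bs, rng.standard_normal(N))
print(sv[:5])  # largest singular values; a histogram of sv approximates the ESD
```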
Submission history
From: Victor Slavin
[v1] Tue, 7 Jan 2025 23:23:26 UTC (1,640 KB)