Norm of Mean Contextualized Embeddings Determines their Variance
by Hiroaki Yamagiwa and one other author
Abstract: Contextualized embeddings vary by context, even for the same token, and form a distribution in the embedding space. To analyze this distribution, we focus on the norm of the mean embedding and the variance of the embeddings. In this study, we first demonstrate that these values follow the well-known formula for variance in statistics and provide an efficient sequential computation method. Then, by observing embeddings from intermediate layers of several Transformer models, we find a strong trade-off relationship between the norm and the variance: as the mean embedding moves closer to the origin, the variance increases. This trade-off is likely influenced by the layer normalization mechanism used in Transformer models. Furthermore, when the sets of token embeddings are treated as clusters, we show that the variance of the entire embedding set can theoretically be decomposed into the within-cluster variance and the between-cluster variance. We find experimentally that as the layers of Transformer models deepen, the embeddings move farther from the origin, the between-cluster variance relatively decreases, and the within-cluster variance relatively increases. These results are consistent with existing studies on the anisotropy of embedding spaces across layers.
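The variance identity and the cluster decomposition mentioned in the abstract can be checked numerically. Below is a minimal NumPy sketch, not the paper's code: the array shapes, the random embeddings, the cluster labels, and the streaming accumulation are illustrative assumptions. It verifies that the variance of a set of embeddings equals the mean squared norm minus the squared norm of the mean (the source of the norm/variance trade-off when the mean squared norm is roughly fixed), and that total variance splits into within-cluster and between-cluster parts.

```python
import numpy as np

# A minimal numerical sketch (not the paper's code): rows of X are hypothetical
# contextualized embeddings; cluster labels stand in for token types.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 768))              # 100 contexts, 768-dim embeddings
labels = rng.integers(0, 5, size=100)        # hypothetical token-type labels

mean_emb = X.mean(axis=0)
var_total = ((X - mean_emb) ** 2).sum(axis=1).mean()

# Well-known identity: Var[x] = E[||x||^2] - ||E[x]||^2, i.e. a larger norm of
# the mean implies a smaller variance for a fixed mean squared norm.
var_identity = (X ** 2).sum(axis=1).mean() - np.dot(mean_emb, mean_emb)
assert np.allclose(var_total, var_identity)

# The identity permits a one-pass, sequential computation by accumulating only
# the running sums of x and ||x||^2 (an assumed reading of "sequential").
sum_x, sum_sq = np.zeros(X.shape[1]), 0.0
for x in X:
    sum_x += x
    sum_sq += np.dot(x, x)
n = len(X)
assert np.allclose(sum_sq / n - np.dot(sum_x / n, sum_x / n), var_total)

# Law of total variance: total = within-cluster + between-cluster variance.
within = between = 0.0
for c in np.unique(labels):
    Xc = X[labels == c]
    mc = Xc.mean(axis=0)
    w = len(Xc) / n
    within += w * ((Xc - mc) ** 2).sum(axis=1).mean()
    between += w * np.dot(mc - mean_emb, mc - mean_emb)
assert np.allclose(within + between, var_total)
```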
Submission history
From: Hiroaki Yamagiwa
[v1] Tue, 17 Sep 2024 15:02:23 UTC (11,188 KB)
[v2] Tue, 17 Dec 2024 07:07:52 UTC (13,373 KB)