White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?



By Yaodong Yu and 9 other authors

Abstract: In this paper, we contend that a natural objective of representation learning is to compress and transform the distribution of the data, say sets of tokens, towards a low-dimensional Gaussian mixture supported on incoherent subspaces. The goodness of such a representation can be evaluated by a principled measure, called sparse rate reduction, that simultaneously maximizes the intrinsic information gain and extrinsic sparsity of the learned representation. From this perspective, popular deep network architectures, including transformers, can be viewed as realizing iterative schemes to optimize this measure. In particular, we derive a transformer block from alternating optimization on parts of this objective: the multi-head self-attention operator compresses the representation by implementing an approximate gradient descent step on the coding rate of the features, and the subsequent multi-layer perceptron sparsifies the features. This leads to a family of white-box, transformer-like deep network architectures, named CRATE, which are mathematically fully interpretable. We show, by way of a novel connection between denoising and compression, that the inverse of the aforementioned compressive encoding can be realized by the same class of CRATE architectures. Thus, the so-derived white-box architectures are universal to both encoders and decoders. Experiments show that these networks, despite their simplicity, indeed learn to compress and sparsify representations of large-scale real-world image and text datasets, and achieve performance very close to that of highly engineered transformer-based models: ViT, MAE, DINO, BERT, and GPT2. We believe the proposed computational framework demonstrates great potential in bridging the gap between theory and practice of deep learning, from a unified perspective of data compression. Code is available at this https URL.
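To make the alternating-optimization view concrete, below is a minimal, illustrative sketch (in PyTorch) of a CRATE-style layer: an attention-like step that compresses token features against learned per-head subspaces, followed by an ISTA-like step that sparsifies them against a learned dictionary. This is an assumption-laden approximation of the block described in the abstract, not the authors' released implementation; the class name CRATEBlockSketch, the subspace projections U, the dictionary D, and the hyperparameters kappa, eta, and lam are hypothetical placeholders.

```python
# Minimal sketch of a CRATE-style block (illustrative only, not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CRATEBlockSketch(nn.Module):
    def __init__(self, dim: int, num_heads: int, head_dim: int,
                 kappa: float = 1.0, eta: float = 0.1, lam: float = 0.1):
        super().__init__()
        self.num_heads, self.head_dim = num_heads, head_dim
        # One subspace projection U_k per head; in the white-box derivation the same
        # projection plays the role of query, key, and value maps.
        self.U = nn.Parameter(torch.randn(num_heads, dim, head_dim) * dim ** -0.5)
        # Dictionary D used by the ISTA-style sparsification step.
        self.D = nn.Parameter(torch.randn(dim, dim) * dim ** -0.5)
        self.kappa, self.eta, self.lam = kappa, eta, lam

    def compression_step(self, Z: torch.Tensor) -> torch.Tensor:
        # Multi-head subspace self-attention: a residual, attention-shaped update
        # approximating a gradient step on the coding rate of the features.
        # Z has shape (batch, tokens, dim).
        outs = []
        for k in range(self.num_heads):
            Zk = Z @ self.U[k]                                            # project onto subspace k
            attn = F.softmax(Zk @ Zk.transpose(-2, -1) / self.head_dim ** 0.5, dim=-1)
            outs.append((attn @ Zk) @ self.U[k].T)                        # lift back to model dim
        return Z + self.kappa * sum(outs)

    def sparsification_step(self, Z: torch.Tensor) -> torch.Tensor:
        # One ISTA iteration: gradient step on the reconstruction error w.r.t. the
        # dictionary D, then a non-negative soft threshold to promote sparsity.
        grad = (Z @ self.D - Z) @ self.D.T
        Z = Z - self.eta * grad
        return torch.relu(Z - self.eta * self.lam)

    def forward(self, Z: torch.Tensor) -> torch.Tensor:
        return self.sparsification_step(self.compression_step(Z))

# Usage (shapes are arbitrary): out = CRATEBlockSketch(64, 4, 16)(torch.randn(2, 16, 64))
```

The two sub-steps mirror the abstract's decomposition of the sparse rate reduction objective: the attention-like operator handles compression, and the thresholded dictionary step handles sparsification, replacing the MLP of a standard transformer block.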

Submission history

From: Druv Pai
[v1] Wed, 22 Nov 2023 02:23:32 UTC (34,799 KB)
[v2] Fri, 24 Nov 2023 09:18:44 UTC (34,799 KB)
[v3] Tue, 3 Sep 2024 06:31:48 UTC (29,601 KB)
[v4] Fri, 6 Sep 2024 07:40:40 UTC (29,466 KB)


