On Exact Bit-level Reversible Transformers Without Changing Architectures
Guoqiang Zhang, J.P. Lewis, W. B. Kleijn
Abstract: Various reversible deep neural network (DNN) models have been proposed to reduce memory consumption during training. However, almost all existing reversible DNNs either require special non-standard architectures or are constructed by considerably modifying existing DNN architectures to enable reversibility. In this work we present the BDIA-transformer, an exact bit-level reversible transformer that uses an unchanged standard architecture for inference. The basic idea is to first treat each transformer block as the Euler integration approximation for solving an ordinary differential equation (ODE), and then incorporate the technique of bidirectional integration approximation (BDIA) into the neural architecture, together with activation quantization, to make it exactly bit-level reversible. During training, we let a hyper-parameter $\gamma$ in the BDIA-transformer randomly take one of the two values $\{0.5, -0.5\}$ per training sample per transformer block, averaging every two consecutive integration approximations. As a result, the BDIA-transformer can be viewed as training an ensemble of ODE solvers parameterized by a set of binary random variables, which regularizes the model and improves validation accuracy. Lightweight side information must be stored per transformer block in the forward pass to account for the binary quantization loss and enable exact bit-level reversibility. At inference, the expectation $\mathbb{E}(\gamma)=0$ is used, making the BDIA-transformer architecture identical to a standard transformer up to activation quantization. Our experiments on both image classification and language translation show that BDIA-transformers significantly outperform their conventional counterparts in validation performance while requiring considerably less training memory.
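To make the idea concrete, below is a minimal numpy sketch of a BDIA-style reversible two-state recursion consistent with the abstract: each block is treated as an Euler step, a random $\gamma \in \{0.5, -0.5\}$ couples two consecutive states, and the forward recursion can be inverted algebraically to recover earlier activations. The specific coupling coefficients, the toy `tanh` blocks, and all function names here are illustrative assumptions, not the paper's exact formulation; the sketch also omits the activation quantization and stored side information that the paper uses to make the inversion exact at the bit level.

```python
# Sketch of a BDIA-style reversible recursion (assumed form, not the paper's exact one):
#     x_{k+1} = gamma * x_{k-1} + (1 - gamma) * x_k + f_k(x_k)
# At gamma = 0 (the inference-time expectation E[gamma] = 0) this reduces to the
# standard residual / Euler update x_{k+1} = x_k + f_k(x_k).
import numpy as np

rng = np.random.default_rng(0)
dim, depth = 16, 4

# Toy stand-ins for transformer blocks f_k (attention + MLP in the real model).
weights = [rng.standard_normal((dim, dim)) * 0.1 for _ in range(depth)]
blocks = [lambda x, W=W: np.tanh(x @ W) for W in weights]

def forward(x0, gammas):
    """Run the assumed BDIA-style recursion, keeping only the last two states."""
    x_prev, x_curr = x0, x0 + blocks[0](x0)          # first block: plain Euler step
    for k in range(1, depth):
        g = gammas[k]
        x_next = g * x_prev + (1 - g) * x_curr + blocks[k](x_curr)
        x_prev, x_curr = x_curr, x_next
    return x_prev, x_curr

def backward(x_penult, x_last, gammas):
    """Recover x_0 from the last two states by algebraically inverting the recursion.
    Dividing by gamma = +/-0.5 doubles any rounding error, which is why the paper
    quantizes activations and stores lightweight side information per block to make
    this inversion exact at the bit level (not modelled in this float sketch)."""
    x_k, x_kp1 = x_penult, x_last
    for k in range(depth - 1, 0, -1):
        g = gammas[k]
        x_km1 = (x_kp1 - (1 - g) * x_k - blocks[k](x_k)) / g
        x_kp1, x_k = x_k, x_km1
    return x_k

x0 = rng.standard_normal(dim)
gammas = rng.choice([0.5, -0.5], size=depth)         # one gamma per block (per sample)
x_prev, x_curr = forward(x0, gammas)
x0_rec = backward(x_prev, x_curr, gammas)
print("max reconstruction error:", np.abs(x0 - x0_rec).max())
```

In this float64 sketch the reconstruction error is small but nonzero; the paper's activation quantization plus stored side information is what turns this approximate invertibility into exact bit-level reversibility.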
Submission history
From: Guoqiang Zhang
[v1]
Fri, 12 Jul 2024 08:42:58 UTC (135 KB)
[v2]
Sat, 5 Oct 2024 11:17:45 UTC (207 KB)