Exploring and Enhancing the Transfer of Distribution in Knowledge Distillation for Autoregressive Language Models
by Jun Rao and 6 other authors
Abstract: Knowledge distillation (KD) compresses large teacher models by training smaller student models to mimic them. The success of KD in autoregressive language models relies mainly on reverse KL for mode-seeking and on student-generated output (SGO) to combat exposure bias. Our theoretical analyses and experimental validation reveal that while reverse KL effectively mimics certain features of the teacher distribution, it fails to capture most of its behaviors. Conversely, SGO incurs higher computational costs and is difficult to optimize, particularly when the student model is much smaller than the teacher model. These constraints stem primarily from the immutable distribution of the teacher model, which cannot adapt to students of varying sizes. We introduce Online Knowledge Distillation (OKD), in which the teacher network integrates small online modules that are trained concurrently with the student model. This strategy removes the need for on-policy sampling and requires only minimal updates to the parameters of the teacher's online modules during training, allowing the teacher to adapt dynamically to the student's distribution and thereby improve distillation. Extensive results across multiple generation datasets show that OKD matches or exceeds the performance of leading methods across various model architectures and sizes, while reducing training time by up to a factor of four.
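The mechanism the abstract describes, a frozen teacher carrying small trainable online modules that co-train with the student under a mode-seeking reverse-KL objective, can be sketched in code. Below is a minimal PyTorch illustration of that idea; the model objects, the adapter wiring inside `teacher`, the HuggingFace-style `.logits` access, and all parameter names are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def reverse_kl(student_logits, teacher_logits, temperature=1.0):
    """Reverse KL, KL(student || teacher): the mode-seeking objective
    commonly used for KD in autoregressive language models."""
    s_logp = F.log_softmax(student_logits / temperature, dim=-1)
    t_logp = F.log_softmax(teacher_logits / temperature, dim=-1)
    s_p = s_logp.exp()
    return (s_p * (s_logp - t_logp)).sum(dim=-1).mean()

# Hypothetical setup: freeze the base teacher weights so that only its
# small online modules (e.g., adapter layers) remain trainable.
# for p in teacher.base_model.parameters():   # attribute name assumed
#     p.requires_grad_(False)
#
# Only the student and the teacher's online modules receive updates:
# params = list(student.parameters()) + list(online_module_params)
# optimizer = torch.optim.AdamW(params, lr=1e-4)

def okd_step(student, teacher, batch, optimizer):
    """One OKD-style training step on a ground-truth batch.

    No on-policy (student-generated) sampling is needed: gradients from
    the reverse-KL loss flow both to the student and to the teacher's
    online modules, letting the teacher's distribution adapt to the
    student while its base weights stay frozen.
    """
    student_logits = student(batch["input_ids"]).logits
    teacher_logits = teacher(batch["input_ids"]).logits  # adapters inside

    loss = reverse_kl(student_logits, teacher_logits)
    optimizer.zero_grad()
    loss.backward()   # updates student params and teacher online modules
    optimizer.step()
    return loss.item()
```

Keeping the teacher's base weights frozen and updating only the small online modules is what keeps the extra training cost minimal relative to on-policy SGO pipelines, which must repeatedly decode from the student.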
Submission history
From: Jun Rao
[v1] Thu, 19 Sep 2024 07:05:26 UTC (2,796 KB)
[v2] Fri, 20 Sep 2024 08:35:45 UTC (2,796 KB)