PromptKD: Distilling Student-Friendly Knowledge for Generative Language Models via Prompt Tuning
by Gyeongman Kim and 2 other authors
Abstract: Recent advancements in large language models (LLMs) have raised concerns about inference costs, increasing the need for research into model compression. While knowledge distillation (KD) is a prominent method for this, research on KD for generative language models like LLMs is relatively sparse, and the approach of distilling student-friendly knowledge, which has shown promising performance in KD for classification models, remains unexplored in generative language models. To explore this approach, we propose PromptKD, a simple yet effective method that utilizes prompt tuning, for the first time in KD, to enable generative language models to transfer student-friendly knowledge. Unlike previous works in classification that require fine-tuning the entire teacher model for extracting student-friendly knowledge, PromptKD achieves similar effects by adding a small number of prompt tokens and tuning only the prompt with student guidance. Extensive experiments on instruction-following datasets show that PromptKD achieves state-of-the-art performance while adding only 0.0007% of the teacher's parameters as prompts. Further analysis suggests that distilling student-friendly knowledge alleviates exposure bias effectively throughout the entire training process, leading to performance enhancements.
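To make the idea concrete, the sketch below shows, in plain PyTorch, one way a prompt-tuned distillation step could look: a small set of soft-prompt embeddings is prepended to the frozen teacher's input, and only the prompt and the student receive gradients from the distillation loss. The TinyLM class, the plain KL objective, and all hyperparameters are illustrative assumptions, not the paper's actual implementation (which uses full-scale generative LMs and incorporates student guidance when tuning the prompt).

# Minimal sketch (not the authors' code): prompt-tuned knowledge distillation.
# Only the soft prompt and the student receive gradients; teacher weights stay frozen.
import torch
import torch.nn.functional as F
from torch import nn

class TinyLM(nn.Module):
    """Toy language model standing in for the teacher/student transformers."""
    def __init__(self, vocab_size=100, d_model=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, token_ids=None, inputs_embeds=None):
        x = self.embed(token_ids) if inputs_embeds is None else inputs_embeds
        h, _ = self.rnn(x)
        return self.head(h)  # logits: (batch, seq, vocab)

vocab, d_model, n_prompt = 100, 32, 4
teacher, student = TinyLM(vocab, d_model), TinyLM(vocab, d_model)
for p in teacher.parameters():          # freeze the teacher's own weights
    p.requires_grad_(False)

# Learnable soft-prompt tokens prepended to the teacher's input embeddings.
soft_prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)
optimizer = torch.optim.AdamW(list(student.parameters()) + [soft_prompt], lr=1e-3)

tokens = torch.randint(0, vocab, (2, 10))                 # dummy input batch
prompt = soft_prompt.unsqueeze(0).expand(tokens.size(0), -1, -1)
teacher_in = torch.cat([prompt, teacher.embed(tokens)], dim=1)

teacher_logits = teacher(inputs_embeds=teacher_in)[:, n_prompt:]  # drop prompt positions
student_logits = student(token_ids=tokens)

# Distillation loss: KL divergence between teacher and student token distributions.
kd_loss = F.kl_div(
    F.log_softmax(student_logits, dim=-1),
    F.softmax(teacher_logits, dim=-1),
    reduction="batchmean",
)
kd_loss.backward()        # gradients reach only the student and the soft prompt
optimizer.step()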
Submission history
From: Gyeongman Kim
[v1] Tue, 20 Feb 2024 09:10:08 UTC (7,761 KB)
[v2] Mon, 24 Jun 2024 05:40:38 UTC (7,779 KB)
[v3] Fri, 27 Sep 2024 06:25:33 UTC (7,781 KB)