An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-tuning
Yun Luo, Zhen Yang, Fandong Meng, Yafu Li, Jie Zhou, Yue Zhang
Abstract: Catastrophic forgetting (CF) is a phenomenon in machine learning where a model forgets previously learned information while acquiring new knowledge for downstream tasks. As large language models (LLMs) have demonstrated remarkable performance, it is intriguing to investigate whether CF exists during the continual instruction tuning of LLMs. This study empirically evaluates forgetting in LLMs' knowledge during continual instruction tuning from the perspectives of domain knowledge, reasoning, and reading comprehension. The experiments reveal that catastrophic forgetting is generally observed in LLMs ranging from 1B to 7B parameters. Surprisingly, as the model scale increases within this range, the severity of forgetting intensifies, which may result from the significantly higher initial performance of the larger LLMs. Comparing the decoder-only model BLOOMZ with the encoder-decoder model mT0, BLOOMZ exhibits less forgetting and retains more knowledge. Interestingly, we also observe that LLMs can mitigate language biases, such as gender bias, during continual fine-tuning. Furthermore, our findings indicate that general instruction tuning can help alleviate the forgetting phenomenon in LLMs during subsequent fine-tuning.
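For intuition, the following is a minimal sketch, not taken from the paper, of how forgetting is commonly quantified in continual-learning evaluations: a model is scored on held-out benchmarks before continual fine-tuning and again after each tuning stage, and forgetting is the drop from the initial score. The benchmark names and scores below are illustrative placeholders, not results from the study.

```python
# A minimal sketch of a standard forgetting metric: for each benchmark,
# the initial score minus the lowest score observed after any continual
# fine-tuning stage. All names and numbers here are hypothetical.

from typing import Dict, List


def forgetting(initial: Dict[str, float],
               after_stages: List[Dict[str, float]]) -> Dict[str, float]:
    """Per-benchmark performance drop: initial score minus the minimum
    score reached after any continual fine-tuning stage."""
    return {
        name: initial[name] - min(stage[name] for stage in after_stages)
        for name in initial
    }


# Hypothetical scores on the three evaluation axes named in the abstract:
# domain knowledge, reasoning, and reading comprehension.
initial = {"domain": 52.0, "reasoning": 41.0, "reading": 63.0}
after = [
    {"domain": 47.5, "reasoning": 38.0, "reading": 60.0},  # after task 1
    {"domain": 44.0, "reasoning": 35.5, "reading": 57.0},  # after task 2
]

print(forgetting(initial, after))
# {'domain': 8.0, 'reasoning': 5.5, 'reading': 6.0}
```

Under this kind of metric, the abstract's scale finding would appear as larger per-benchmark drops for the 7B model than for the 1B model, partly because the larger model starts from a higher initial score.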
Submission history
From: Yun Luo
[v1] Thu, 17 Aug 2023 02:53:23 UTC (688 KB)
[v2] Mon, 21 Aug 2023 08:18:24 UTC (688 KB)
[v3] Tue, 2 Apr 2024 09:05:51 UTC (3,075 KB)
[v4] Mon, 30 Dec 2024 12:32:49 UTC (3,095 KB)