Pruning Multilingual Large Language Models for Multilingual Inference, by Hwichan Kim and 3 other authors
Abstract: Multilingual large language models (MLLMs), trained on balanced multilingual data, demonstrate better zero-shot learning performance in non-English languages than large language models trained on English-dominant data. However, the performance disparity between English and non-English languages remains a challenge yet to be fully addressed. A distinctive characteristic of MLLMs is their high-quality translation capability, indicating an acquired proficiency in aligning languages. This study explores how to enhance the zero-shot performance of MLLMs in non-English languages by leveraging their alignment capability between English and non-English languages. To achieve this, we first analyze the behavior of MLLMs when performing translation and reveal that large-magnitude features play a critical role in the translation process. Inspired by these findings, we retain the weights associated with operations involving the large-magnitude features and prune the other weights to force MLLMs to rely on these features for tasks beyond translation. We empirically demonstrate that this pruning strategy can enhance the MLLMs' performance in non-English languages.
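To make the pruning idea concrete, below is a minimal, hedged sketch of one way such a strategy could look in PyTorch. It is not the authors' implementation: it assumes "large-magnitude features" are identified as hidden dimensions with the largest mean absolute activation on a small calibration set, and it prunes (zeroes) the weight columns of a linear layer that do not read from those dimensions. The function names, the top-k selection, and the toy calibration data are all illustrative assumptions.

```python
# Illustrative sketch only: magnitude-feature-guided pruning of a linear layer.
# Assumptions (not taken from the paper): large-magnitude features are the
# hidden dimensions with the highest mean |activation| on calibration prompts,
# and "pruning other weights" means zeroing weight columns outside those dims.
import torch
import torch.nn as nn


def find_large_magnitude_features(activations: torch.Tensor, top_k: int) -> torch.Tensor:
    """Return indices of the top_k hidden dimensions by mean absolute activation.

    activations: (num_tokens, hidden_dim) tensor collected from calibration
    prompts (e.g., translation examples), flattened over batch and sequence.
    """
    feature_magnitude = activations.abs().mean(dim=0)       # (hidden_dim,)
    return torch.topk(feature_magnitude, k=top_k).indices   # (top_k,)


@torch.no_grad()
def prune_linear_keep_features(layer: nn.Linear, keep_idx: torch.Tensor) -> None:
    """Zero all input weight columns except those reading the kept feature dims."""
    mask = torch.zeros(layer.in_features, dtype=torch.bool, device=layer.weight.device)
    mask[keep_idx] = True
    layer.weight[:, ~mask] = 0.0  # prune weights not tied to large-magnitude features


# Toy usage with random data standing in for real calibration activations.
hidden_dim, num_tokens = 1024, 4096
calib_acts = torch.randn(num_tokens, hidden_dim)
calib_acts[:, :8] *= 50.0  # fabricate a few outlier (large-magnitude) dimensions
linear = nn.Linear(hidden_dim, hidden_dim)
keep = find_large_magnitude_features(calib_acts, top_k=64)
prune_linear_keep_features(linear, keep)
```

In practice, a choice like this would be applied per layer across the MLLM, with the retained feature set derived from translation-style calibration data; the sketch fixes a single linear layer and a hypothetical top-k threshold purely for illustration.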
Submission history
From: Hwichan Kim
[v1] Wed, 25 Sep 2024 13:15:50 UTC (3,269 KB)
[v2] Wed, 2 Oct 2024 07:52:56 UTC (3,273 KB)