An Empirical Study on Cross-lingual Vocabulary Adaptation for Efficient Language Model Inference
by Atsuki Yamaguchi and 2 other authors
Abstract: The development of state-of-the-art generative large language models (LLMs) disproportionately relies on English-centric tokenizers, vocabularies, and pre-training data. Although some LLMs have multilingual capabilities, recent studies have shown that their inference efficiency deteriorates when generating text in languages other than English, resulting in increased inference time and cost. Cross-lingual vocabulary adaptation (CVA) methods have been proposed to adapt models to a target language, aiming to improve downstream performance. However, the effectiveness of these methods in improving the inference efficiency of generative LLMs has yet to be explored. In this paper, we perform an empirical study of five CVA methods on four generative LLMs (including monolingual and multilingual models) across four typologically diverse languages and four natural language understanding tasks. We find that CVA substantially contributes to LLM inference speedups of up to 271.5%. We also show that adapting LLMs that have been pre-trained on more balanced multilingual data results in downstream performance comparable to that of the original models.
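The efficiency gap the abstract describes comes from tokenization: an English-centric vocabulary splits non-English text into many more tokens, and each extra token is an extra decoding step at generation time. The sketch below is not from the paper; the gpt2 checkpoint and the example sentences are illustrative assumptions, used only to show how one might measure this over-segmentation with the Hugging Face transformers library.

```python
# Minimal sketch: count how many tokens an English-centric tokenizer
# produces for roughly parallel sentences in different languages.
# The model name and sentences are illustrative assumptions.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # English-centric BPE tokenizer

sentences = {
    "en": "The weather is nice today.",
    "ja": "今日は天気がいいです。",  # often over-segmented by an English BPE vocabulary
}

for lang, text in sentences.items():
    n_tokens = len(tokenizer(text)["input_ids"])
    # More tokens for the same content means more decoding steps,
    # and therefore slower, more expensive inference.
    print(f"{lang}: {n_tokens} tokens for {len(text)} characters")
```

Under this view, a CVA method that swaps in a target-language vocabulary reduces the token count per sentence, which is one plausible route to the inference speedups the abstract reports.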
Submission history
From: Atsuki Yamaguchi
[v1] Fri, 16 Feb 2024 14:15:15 UTC (634 KB)
[v2] Mon, 17 Jun 2024 12:00:02 UTC (879 KB)
[v3] Thu, 26 Sep 2024 11:15:14 UTC (880 KB)