Ladder: A Model-Agnostic Framework Boosting LLM-based Machine Translation to the Next Level, by Zhaopeng Feng and 4 other authors
Abstract: General-purpose Large Language Models (LLMs) like GPT-4 have achieved remarkable advancements in machine translation (MT) by leveraging extensive web content. Translation-specific LLMs, on the other hand, are built by pre-training on domain-specific monolingual corpora and fine-tuning with human-annotated translation data. Despite their superior performance, these methods either demand an unprecedented scale of computing and data or substantial human editing and annotation effort. In this paper, we develop MT-Ladder, a novel model-agnostic and cost-effective tool for refining the translation performance of general LLMs. MT-Ladder is trained on pseudo-refinement triplets, which can be easily obtained from existing LLMs without additional human cost. During training, we propose a hierarchical fine-tuning strategy with an easy-to-hard schema, progressively improving MT-Ladder's refining performance. The trained MT-Ladder can be seamlessly integrated with any general-purpose LLM to boost its translation performance. Using Gemma-2B/7B as the backbone, MT-Ladder-2B can elevate raw translations to the level of top-tier open-source models (e.g., refining BigTranslate-13B with +6.91 BLEU and +3.52 COMET for XX-En), and MT-Ladder-7B can further enhance model performance to be on par with the state-of-the-art GPT-4. Extensive ablation and analysis corroborate the effectiveness of MT-Ladder in diverse settings. Our code is available at this https URL
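The abstract describes a draft-then-refine pipeline: a general-purpose LLM produces a raw translation, and the trained MT-Ladder model revises it given the source. The sketch below illustrates that two-step flow under stated assumptions; it is not the authors' code. The refinement checkpoint path, the prompt templates, and the helper translate_and_refine are hypothetical, and only the Hugging Face transformers API (AutoTokenizer, AutoModelForCausalLM, generate) is assumed to be available.

```python
# Minimal sketch of a Ladder-style draft-then-refine pipeline (assumptions noted above).
from transformers import AutoModelForCausalLM, AutoTokenizer

DRAFT_MODEL = "google/gemma-7b-it"       # general-purpose LLM that produces the draft
REFINER_MODEL = "path/to/mt-ladder-2b"   # hypothetical path to a trained MT-Ladder checkpoint


def translate_and_refine(source: str, src_lang: str = "German", tgt_lang: str = "English") -> str:
    # Step 1: obtain a draft translation from the general-purpose LLM.
    tok_d = AutoTokenizer.from_pretrained(DRAFT_MODEL)
    lm_d = AutoModelForCausalLM.from_pretrained(DRAFT_MODEL)
    draft_prompt = (
        f"Translate the following {src_lang} sentence into {tgt_lang}:\n"
        f"{source}\nTranslation:"
    )
    draft_ids = lm_d.generate(**tok_d(draft_prompt, return_tensors="pt"), max_new_tokens=128)
    draft = tok_d.decode(draft_ids[0], skip_special_tokens=True).split("Translation:")[-1].strip()

    # Step 2: ask the refinement model to improve the draft, conditioned on the source.
    tok_r = AutoTokenizer.from_pretrained(REFINER_MODEL)
    lm_r = AutoModelForCausalLM.from_pretrained(REFINER_MODEL)
    refine_prompt = (
        f"Source ({src_lang}): {source}\n"
        f"Draft translation ({tgt_lang}): {draft}\n"
        f"Improved translation ({tgt_lang}):"
    )
    refined_ids = lm_r.generate(**tok_r(refine_prompt, return_tensors="pt"), max_new_tokens=128)
    refined = tok_r.decode(refined_ids[0], skip_special_tokens=True)
    return refined.split(f"Improved translation ({tgt_lang}):")[-1].strip()
```

At training time, per the abstract, the refiner is fine-tuned on pseudo-refinement triplets; in terms of this sketch, such a triplet would pair the source sentence, an LLM-generated draft like the one in step 1, and a reference target, so no additional human annotation is required.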
Submission history
From: Zhaopeng Feng
[v1] Sat, 22 Jun 2024 05:33:35 UTC (1,495 KB)
[v2] Fri, 9 Aug 2024 08:06:39 UTC (2,842 KB)
[v3] Tue, 29 Oct 2024 05:15:09 UTC (2,885 KB)