PreAlign: Boosting Cross-Lingual Transfer by Early Establishment of Multilingual Alignment




Abstract: Large language models demonstrate reasonable multilingual abilities, despite predominantly English-centric pretraining. However, the spontaneous multilingual alignment in these models is shown to be weak, leading to unsatisfactory cross-lingual transfer and knowledge sharing. Previous works attempt to address this issue by explicitly injecting multilingual alignment information during or after pretraining; as a result, in the early stage of pretraining the alignment remains too weak to support sharing information or knowledge across languages. In this paper, we propose PreAlign, a framework that establishes multilingual alignment prior to language model pretraining. PreAlign injects multilingual alignment by initializing the model to generate similar representations of aligned words and preserves this alignment using a code-switching strategy during pretraining. Extensive experiments in a synthetic English to English-Clone setting demonstrate that PreAlign significantly outperforms standard multilingual joint training in language modeling, zero-shot cross-lingual transfer, and cross-lingual knowledge application. Additional experiments in real-world scenarios validate PreAlign's effectiveness across various model sizes.
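The abstract names two mechanisms: initializing aligned words to start from similar representations, and code-switching the pretraining text so that alignment is preserved during training. As a rough illustration only (not the authors' implementation; the word dictionary, vocabulary, function names, and 15% switch rate below are assumptions), these two ideas could be sketched in Python as follows:

    import random

    # Sketch 1: copy a source word's embedding row to its aligned target word,
    # so both start from (nearly) identical representations before pretraining.
    def init_aligned_embeddings(embedding, align_dict, vocab):
        for src_word, tgt_word in align_dict.items():
            if src_word in vocab and tgt_word in vocab:
                embedding[vocab[tgt_word]] = list(embedding[vocab[src_word]])
        return embedding

    # Sketch 2: code-switching augmentation -- randomly replace source-language
    # tokens with their dictionary translations in the pretraining text.
    def code_switch(tokens, align_dict, rate=0.15, seed=None):
        rng = random.Random(seed)
        return [align_dict[t] if t in align_dict and rng.random() < rate else t
                for t in tokens]

    if __name__ == "__main__":
        # Toy English -> "English-Clone" dictionary and vocabulary (illustrative).
        align_dict = {"cat": "cat_clone", "sat": "sat_clone", "mat": "mat_clone"}
        vocab = {"cat": 0, "sat": 1, "mat": 2,
                 "cat_clone": 3, "sat_clone": 4, "mat_clone": 5}
        embedding = [[random.random() for _ in range(4)] for _ in range(len(vocab))]
        embedding = init_aligned_embeddings(embedding, align_dict, vocab)
        print(code_switch("the cat sat on the mat".split(), align_dict, rate=0.5, seed=0))

In this sketch, aligned word pairs share an initial embedding, and the code-switched text keeps pushing those pairs toward the same contexts during training; the paper itself evaluates this idea in both the synthetic English-Clone setting and real languages.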

Submission history

From: Jiahuan Li
[v1] Tue, 23 Jul 2024 06:59:53 UTC (9,475 KB)
[v2] Fri, 4 Oct 2024 11:34:23 UTC (9,481 KB)


