Extending LLMs to New Languages: A Case Study of Llama and Persian Adaptation

[Submitted on 17 Dec 2024]

Authors: Samin Mahdizadeh Sani and 4 other authors

Abstract: Large language models (LLMs) have made great progress in classification and text generation tasks. However, they are mainly trained on English data and often struggle with low-resource languages. In this study, we explore adding a new language, Persian, to Llama, a model with limited understanding of Persian, using parameter-efficient fine-tuning. We employ a multi-stage approach: pretraining on monolingual Persian data, aligning representations through bilingual pretraining and instruction datasets, and instruction-tuning with task-specific datasets. We evaluate the model’s performance on generation and classification tasks at each stage. Our findings suggest that incorporating Persian through bilingual data alignment can improve classification accuracy on Persian tasks, with no adverse impact on English tasks and sometimes even improvements. The results also highlight the model’s initial strength as a critical factor when training data is limited, with cross-lingual alignment offering minimal benefit to the low-resource language. Knowledge transfer from English to Persian has only a marginal effect, primarily benefiting simple classification tasks.
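To make the multi-stage recipe concrete, the sketch below shows one common way to set up parameter-efficient fine-tuning of a Llama checkpoint with LoRA adapters via the Hugging Face PEFT library. It is a minimal illustration, not the paper's reported configuration: the base checkpoint, adapter rank, and target modules are assumptions, and the three training stages are indicated only as comments.

```python
# Minimal sketch of parameter-efficient fine-tuning with LoRA adapters,
# assuming Hugging Face Transformers + PEFT. Checkpoint name and
# hyperparameters are illustrative, not the paper's actual setup.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "meta-llama/Llama-2-7b-hf"  # assumed base model (requires access)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA freezes the base weights and trains only small low-rank matrices
# injected into the attention projections.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only a small fraction is trainable

# The multi-stage approach would reuse this adapter setup with a
# different corpus at each stage:
#   Stage 1: continued pretraining on monolingual Persian text
#   Stage 2: bilingual (English-Persian) pretraining and instruction
#            data to align representations across the two languages
#   Stage 3: instruction tuning on task-specific datasets
```

Because only the adapter weights are updated, each stage is cheap enough to run on a single GPU for a 7B-scale model, which is the usual motivation for choosing LoRA over full fine-tuning in this setting.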

Submission history

From: Samin Mahdizadeh Sani
[v1]
Tue, 17 Dec 2024 23:18:06 UTC (2,747 KB)


