Xmodel-1.5: An 1B-scale Multilingual LLM, by Wang Qun and 3 other authors
Abstract: We introduce Xmodel-1.5, a 1-billion-parameter multilingual large language model pretrained on 2 trillion tokens and designed for balanced performance and scalability. Unlike most large models, which use a BPE tokenizer, Xmodel-1.5 employs a custom unigram tokenizer with 65,280 tokens, optimizing both efficiency and accuracy. The model delivers competitive results across multiple languages, including Thai, Arabic, French, Chinese, and English, outperforming Alibaba's PolyLM-1.7B on the corresponding evaluation datasets. Xmodel-1.5 excels on benchmarks such as mMMLU and PIQA, and achieves state-of-the-art results in Thai. To support low-resource language research, we release Xdata_Thai, a Thai-specific evaluation dataset featuring unique linguistic challenges such as gendered particles and idioms. While the model demonstrates strong performance, there is still room for improvement in handling culturally specific nuances. We hope this work contributes to advancements in multilingual AI research. Models and code are publicly available on GitHub at this https URL
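The abstract's main implementation detail is the 65,280-token unigram tokenizer. As a rough illustration only (the paper does not describe the authors' actual training pipeline), a unigram tokenizer of that vocabulary size could be built with SentencePiece along the following lines; the corpus path, character coverage, and other settings shown here are assumptions.

import sentencepiece as spm

# Train a unigram-LM tokenizer with a 65,280-entry vocabulary.
# The corpus file and character_coverage value are assumed for illustration;
# the paper only states the tokenizer type and vocabulary size.
spm.SentencePieceTrainer.train(
    input="multilingual_corpus.txt",   # assumed corpus file
    model_prefix="xmodel_unigram",
    model_type="unigram",
    vocab_size=65280,
    character_coverage=0.9995,         # assumed; high coverage for Thai/Arabic/Chinese scripts
)

# Load the trained model and tokenize a mixed-language sample.
sp = spm.SentencePieceProcessor(model_file="xmodel_unigram.model")
print(sp.encode("สวัสดีครับ hello 你好", out_type=str))

A unigram tokenizer scores whole-subword candidates rather than greedily merging pairs as BPE does, which can yield more balanced segmentation across scripts; this sketch simply shows how such a vocabulary might be produced, not the authors' method.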
Submission history
From: Qun Wang
[v1] Fri, 15 Nov 2024 10:01:52 UTC (2,361 KB)
[v2] Fri, 22 Nov 2024 08:57:42 UTC (2,487 KB)
[v3] Wed, 4 Dec 2024 11:49:04 UTC (2,487 KB)