Language Ranker: A Metric for Quantifying LLM Performance Across High and Low-Resource Languages, by Zihao Li and 6 other authors
Abstract: The development of Large Language Models (LLMs) relies on extensive text corpora, which are often unevenly distributed across languages. This imbalance results in LLMs performing significantly better on high-resource languages such as English, German, and French, while their capabilities in low-resource languages remain inadequate. Currently, there is a lack of quantitative methods for evaluating LLM performance in these low-resource languages. To address this gap, we propose the Language Ranker, an intrinsic metric designed to benchmark and rank languages based on LLM performance using internal representations. By comparing the LLM's internal representations of various languages against a baseline derived from English, we can assess the model's multilingual capabilities in a robust and language-agnostic manner. Our analysis reveals that high-resource languages exhibit higher similarity scores with English, demonstrating superior performance, while low-resource languages show lower similarity scores, underscoring the effectiveness of our metric in assessing language-specific capabilities. Moreover, the experiments show a strong correlation between an LLM's performance in different languages and the proportion of those languages in its pre-training corpus. These insights underscore the efficacy of the Language Ranker as a tool for evaluating LLM performance across different languages, particularly those with limited resources.
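The core idea of the metric, as the abstract describes it, is to score each language by how similar its internal representation is to an English baseline. A minimal sketch of that scoring step is below, assuming mean-pooled hidden-state vectors have already been extracted per language; the vectors, language codes, and the `language_ranker` helper are illustrative, not the paper's actual implementation:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two representation vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def language_ranker(reps, baseline="en"):
    """Rank languages by similarity of their mean internal
    representation to the baseline language (higher = closer)."""
    base = reps[baseline]
    scores = {lang: cosine_similarity(vec, base)
              for lang, vec in reps.items() if lang != baseline}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy mean-pooled hidden states (hypothetical, 4-dimensional).
reps = {
    "en": np.array([1.0, 0.5, 0.2, 0.1]),
    "de": np.array([0.9, 0.55, 0.25, 0.1]),  # high-resource: close to en
    "sw": np.array([0.2, 0.9, 0.8, 0.5]),    # low-resource: farther from en
}
ranking = language_ranker(reps)  # list of (language, score), best first
```

In this toy setup the high-resource language ends up with the higher similarity score, mirroring the trend the abstract reports; a real evaluation would extract the hidden states from the LLM itself over parallel text in each language.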
Submission history
From: Mengnan Du [view email]
[v1] Wed, 17 Apr 2024 16:53:16 UTC (12,004 KB)
[v2] Sun, 16 Jun 2024 08:24:32 UTC (8,400 KB)
[v3] Wed, 11 Dec 2024 09:04:18 UTC (9,705 KB)