[Submitted on 8 Jan 2025]
When LLMs Struggle: Reference-less Translation Evaluation for Low-resource Languages, by Archchana Sindhujan and 3 other authors
Abstract: This paper investigates the reference-less evaluation of machine translation for low-resource language pairs, known as quality estimation (QE). Segment-level QE is a challenging cross-lingual language understanding task that assigns a quality score (0-100) to the translated output. We comprehensively evaluate large language models (LLMs) in zero-shot and few-shot scenarios, and perform instruction fine-tuning using a novel prompt based on annotation guidelines. Our results indicate that prompt-based approaches are outperformed by encoder-based fine-tuned QE models. Our error analysis reveals tokenization issues, along with errors due to transliteration and named entities, and argues for refinement in LLM pre-training for cross-lingual tasks. We publicly release the data and trained models for further research.
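To make the task concrete, the following is a minimal sketch of prompt-based segment-level QE as described above: assembling a zero-shot prompt that asks an LLM for a 0-100 quality score, and parsing the score from the model's reply. The prompt wording, function names, and parsing logic are illustrative assumptions, not the paper's actual prompt or pipeline.

```python
import re
from typing import Optional

def build_qe_prompt(source: str, translation: str,
                    src_lang: str, tgt_lang: str) -> str:
    """Assemble a zero-shot quality-estimation prompt for an LLM.

    Illustrative wording only; the paper's prompt is derived from
    annotation guidelines and is not reproduced here.
    """
    return (
        "You are a translation quality evaluator.\n"
        f"Source ({src_lang}): {source}\n"
        f"Translation ({tgt_lang}): {translation}\n"
        "Rate the translation quality from 0 (nonsense) to 100 (perfect).\n"
        "Answer with a single integer."
    )

def parse_score(llm_output: str) -> Optional[int]:
    """Extract the first integer in the 0-100 range from a model reply."""
    match = re.search(r"\b(100|[1-9]?\d)\b", llm_output)
    return int(match.group(1)) if match else None
```

The prompt string would be sent to an LLM of choice; `parse_score` then recovers the segment-level score, returning `None` when the reply contains no usable number.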
Submission history
From: Archchana Sindhujan
[v1]
Wed, 8 Jan 2025 12:54:05 UTC (18,095 KB)