KaLM-Embedding: Superior Training Data Brings A Stronger Embedding Model, by Xinshuo Hu and 10 other authors
Abstract: As retrieval-augmented generation prevails in large language models, embedding models are becoming increasingly crucial. Despite the growing number of general embedding models, prior work often overlooks the critical role of training data quality. In this work, we introduce KaLM-Embedding, a general multilingual embedding model that leverages a large quantity of cleaner, more diverse, and domain-specific training data. Our model has been trained with key techniques proven to enhance performance: (1) persona-based synthetic data to create diversified examples distilled from LLMs, (2) ranking consistency filtering to remove less informative samples, and (3) semi-homogeneous task batch sampling to improve training efficacy. Departing from traditional BERT-like architectures, we adopt Qwen2-0.5B as the pre-trained model, facilitating the adaptation of auto-regressive language models for general embedding tasks. Extensive evaluations on the MTEB benchmark across multiple languages show that our model outperforms others of comparable size, setting a new standard for multilingual embedding models with <1B parameters.
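The abstract does not spell out the exact criterion behind ranking consistency filtering; the sketch below shows one plausible form, assuming a reference embedding model scores each (query, positive, negatives) example and the example is kept only when the positive ranks within the top-k candidates. The model name, top-k threshold, and data format are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of ranking-consistency filtering for training data:
# a reference embedder scores the positive against the negatives, and samples
# whose positive falls outside the top-k candidates are dropped as noisy or
# less informative. All names and thresholds here are assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer

reference = SentenceTransformer("BAAI/bge-small-en-v1.5")  # assumed reference embedder

def keep_example(query: str, positive: str, negatives: list[str], top_k: int = 3) -> bool:
    """Return True if the positive ranks among the top_k candidates for the query."""
    candidates = [positive] + negatives
    q_emb = reference.encode([query], normalize_embeddings=True)[0]
    c_emb = reference.encode(candidates, normalize_embeddings=True)
    scores = c_emb @ q_emb  # cosine similarity, since embeddings are unit-normalized
    rank_of_positive = int(np.argsort(-scores).tolist().index(0))  # candidate 0 is the positive
    return rank_of_positive < top_k

# Usage: keep only samples the reference model ranks consistently.
examples = [
    {"query": "capital of France",
     "positive": "Paris is the capital of France.",
     "negatives": ["Berlin is in Germany.", "France borders Spain."]},
]
filtered = [ex for ex in examples
            if keep_example(ex["query"], ex["positive"], ex["negatives"])]
```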
Submission history
From: Xinshuo Hu
[v1] Thu, 2 Jan 2025 03:17:51 UTC (444 KB)
[v2] Fri, 3 Jan 2025 03:16:10 UTC (444 KB)