Judging Quality Across Languages: A Multilingual Approach to Pretraining Data Filtering with Language Models
May 28, 2025
作者: Mehdi Ali, Manuel Brack, Max Lübbering, Elias Wendt, Abbas Goher Khan, Richard Rutmann, Alex Jude, Maurice Kraus, Alexander Arno Weber, Felix Stollenwerk, David Kaczér, Florian Mai, Lucie Flek, Rafet Sifa, Nicolas Flores-Herr, Joachim Köhler, Patrick Schramowski, Michael Fromm, Kristian Kersting
cs.AI
Abstract
High-quality multilingual training data is essential for effectively
pretraining large language models (LLMs). Yet, the availability of suitable
open-source multilingual datasets remains limited. Existing state-of-the-art
datasets mostly rely on heuristic filtering methods, restricting both their
cross-lingual transferability and scalability. Here, we introduce JQL, a
systematic approach that efficiently curates diverse and high-quality
multilingual data at scale while significantly reducing computational demands.
JQL distills LLMs' annotation capabilities into lightweight annotators based on
pretrained multilingual embeddings. These models exhibit robust multilingual
and cross-lingual performance, even for languages and scripts unseen during
training. Evaluated empirically across 35 languages, the resulting annotation
pipeline substantially outperforms current heuristic filtering methods like
Fineweb2. JQL notably enhances downstream model training quality and increases
data retention rates. Our research provides practical insights and valuable
resources for multilingual data curation, raising the standards of multilingual
dataset development.
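The abstract describes distilling LLM quality judgments into lightweight annotators built on pretrained multilingual embeddings, which are then used to filter pretraining documents. The sketch below illustrates that general idea only; it is not the paper's implementation, and the embedding model, head architecture, loss, and filtering threshold are all illustrative assumptions.

```python
# Hypothetical JQL-style lightweight annotator: a small regression head on
# frozen multilingual sentence embeddings, trained to reproduce LLM-assigned
# quality scores, then used to filter documents by a score threshold.
# All concrete choices here (model name, dims, threshold) are assumptions.
import torch
import torch.nn as nn
from sentence_transformers import SentenceTransformer

# Frozen multilingual embedding backbone (illustrative choice, 384-dim output).
embedder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

class QualityHead(nn.Module):
    """Lightweight annotator: maps a document embedding to a quality score."""
    def __init__(self, dim: int = 384, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x).squeeze(-1)

head = QualityHead()
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(texts: list[str], llm_scores: torch.Tensor) -> float:
    """One distillation step: fit the head to LLM-provided quality scores."""
    with torch.no_grad():
        emb = embedder.encode(texts, convert_to_tensor=True)
    pred = head(emb)
    loss = loss_fn(pred, llm_scores)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def filter_documents(texts: list[str], threshold: float = 0.5) -> list[str]:
    """Keep only documents whose predicted quality exceeds the threshold."""
    with torch.no_grad():
        emb = embedder.encode(texts, convert_to_tensor=True)
        scores = head(emb)
    return [t for t, s in zip(texts, scores.tolist()) if s > threshold]
```

Because the embedding backbone stays frozen and only the small head is trained, annotating web-scale corpora costs a fraction of running the teacher LLM on every document, which is the efficiency argument the abstract makes.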