Gaperon: A Peppered English-French Generative Language Model Suite

October 29, 2025
Authors: Nathan Godey, Wissam Antoun, Rian Touchent, Rachel Bawden, Éric de la Clergerie, Benoît Sagot, Djamé Seddah
cs.AI

Abstract

We release Gaperon, a fully open suite of French-English-coding language models designed to advance transparency and reproducibility in large-scale model training. The Gaperon family includes 1.5B, 8B, and 24B parameter models trained on 2-4 trillion tokens, released with all elements of the training pipeline: French and English datasets filtered with a neural quality classifier, an efficient data curation and training framework, and hundreds of intermediate checkpoints. Through this work, we study how data filtering and contamination interact to shape both benchmark and generative performance. We find that filtering for linguistic quality enhances text fluency and coherence but yields subpar benchmark results, and that late deliberate contamination (continuing training on data mixes that include test sets) recovers competitive scores while only moderately harming generation quality. We discuss how commonly used neural filtering can unintentionally amplify benchmark leakage. To support further research, we also introduce harmless data poisoning during pretraining, providing a realistic testbed for safety studies. By openly releasing all models, datasets, code, and checkpoints, Gaperon establishes a reproducible foundation for exploring the trade-offs between data curation, evaluation, safety, and openness in multilingual language model development.
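To make the quality-filtering step concrete, below is a minimal sketch of classifier-based corpus filtering: score each document and keep only those above a cutoff. The toy_score function and the 0.6 threshold are illustrative stand-ins of our own, not the paper's method; Gaperon's actual pipeline uses a trained neural quality classifier whose details are in the paper and released code.

```python
# Minimal sketch of quality-based corpus filtering: score each
# document, keep those at or above a threshold. The scorer here is a
# trivial heuristic stand-in; a real pipeline (like Gaperon's) would
# call a fine-tuned neural classifier instead.
from typing import Callable, Iterable, List


def filter_corpus(
    documents: Iterable[str],
    score_fn: Callable[[str], float],
    threshold: float = 0.6,
) -> List[str]:
    """Keep documents whose quality score meets the threshold."""
    return [doc for doc in documents if score_fn(doc) >= threshold]


def toy_score(doc: str) -> float:
    """Hypothetical stand-in scorer: penalize repetitive, shouty
    boilerplate. A neural classifier would replace this function."""
    words = doc.split()
    if not words:
        return 0.0
    unique_ratio = len(set(words)) / len(words)          # repetition penalty
    upper_ratio = sum(w.isupper() for w in words) / len(words)  # shouting penalty
    return max(0.0, unique_ratio - upper_ratio)


docs = [
    "A well-written article discussing the history of the French language.",
    "BUY NOW BUY NOW cheap deals FREE FREE FREE click here",
]
# Keeps the first document, drops the second.
print(filter_corpus(docs, toy_score))
```

The abstract's observation follows from this setup: whatever regularities the classifier rewards (fluent, benchmark-like prose) are amplified in the surviving corpus, which is how neural filtering can inadvertently increase benchmark leakage.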