Gaperon: A Peppered English-French Generative Language Model Suite
October 29, 2025
Authors: Nathan Godey, Wissam Antoun, Rian Touchent, Rachel Bawden, Éric de la Clergerie, Benoît Sagot, Djamé Seddah
cs.AI
Abstract
We release Gaperon, a fully open suite of French-English-coding language models designed to advance transparency and reproducibility in large-scale model training. The Gaperon family includes 1.5B-, 8B-, and 24B-parameter models trained on 2-4 trillion tokens, released with all elements of the training pipeline: French and English datasets filtered with a neural quality classifier, an efficient data curation and training framework, and hundreds of intermediate checkpoints. Through this work, we study how data filtering and contamination interact to shape both benchmark and generative performance. We find that filtering for linguistic quality enhances text fluency and coherence but yields subpar benchmark results, and that late deliberate contamination (continuing training on data mixes that include test sets) recovers competitive scores while harming generation quality only moderately. We also discuss how commonly used neural filtering can unintentionally amplify benchmark leakage. To support further research, we introduce harmless data poisoning during pretraining, providing a realistic testbed for safety studies. By openly releasing all models, datasets, code, and checkpoints, Gaperon establishes a reproducible foundation for exploring the trade-offs between data curation, evaluation, safety, and openness in multilingual language model development.
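
To make the quality-filtering idea concrete, here is a minimal sketch of classifier-based document filtering of the kind the abstract describes. It is not the Gaperon pipeline: the toy features, labels, training data, and the 0.5 threshold are illustrative assumptions; the paper's actual neural quality classifier is not reproduced here.

# Minimal illustrative sketch of classifier-based quality filtering.
# Features, labels, and the threshold are assumptions, not Gaperon's setup.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy supervision: 1 = fluent, well-formed text; 0 = boilerplate or noise.
train_docs = [
    "The committee published a detailed report on regional water policy.",
    "Le rapport présente les résultats de l'étude menée en 2023.",
    "click here click here buy now free download !!!",
    "lorem ipsum lorem ipsum cookie settings accept all",
]
train_labels = [1, 1, 0, 0]

quality_clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
quality_clf.fit(train_docs, train_labels)

def filter_corpus(docs, threshold=0.5):
    """Keep documents whose predicted quality probability clears the threshold."""
    probs = quality_clf.predict_proba(docs)[:, 1]
    return [doc for doc, p in zip(docs, probs) if p >= threshold]

candidate_docs = [
    "The model was trained on trillions of tokens of French and English text.",
    "subscribe now subscribe now win a free prize click click click",
]
print(filter_corpus(candidate_docs))

As the abstract notes, this kind of learned filtering tends to favor fluent, well-formed text, which can also make benchmark-like passages more likely to survive and thus amplify leakage.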