ATLAS: Adaptive Transfer Scaling Laws for Multilingual Pretraining, Finetuning, and Decoding the Curse of Multilinguality

October 24, 2025
作者: Shayne Longpre, Sneha Kudugunta, Niklas Muennighoff, I-Hung Hsu, Isaac Caswell, Alex Pentland, Sercan Arik, Chen-Yu Lee, Sayna Ebrahimi
cs.AI

Abstract

Scaling laws research has focused overwhelmingly on English, yet the most prominent AI models explicitly serve billions of international users. In this work, we undertake the largest multilingual scaling laws study to date, totaling 774 multilingual training experiments spanning 10M-8B model parameters, 400+ training languages, and 48 evaluation languages. We introduce the Adaptive Transfer Scaling Law (ATLAS) for both monolingual and multilingual pretraining, which improves out-of-sample generalization over existing scaling laws, often by more than 0.3 R^2. Our analyses of these experiments shed light on multilingual learning dynamics, transfer properties between languages, and the curse of multilinguality. First, we derive a cross-lingual transfer matrix, empirically measuring mutual-benefit scores between 38 × 38 = 1,444 language pairs. Second, we derive a language-agnostic scaling law that reveals how to optimally scale model size and data when adding languages without sacrificing performance. Third, we identify the computational crossover points between pretraining from scratch and finetuning from multilingual checkpoints. We hope these findings provide a scientific foundation for democratizing scaling laws across languages and enable practitioners to efficiently scale models beyond English-first AI.
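
The abstract reports that ATLAS improves out-of-sample generalization by more than 0.3 R^2 over existing scaling laws, but it does not spell out the functional form or the fitting procedure. As a minimal sketch of what that comparison involves, the snippet below fits a standard Chinchilla-style parametric loss law to a set of (model size, tokens, loss) runs and scores it by R^2 on held-out runs; the synthetic data, constants, and the Chinchilla form itself are assumptions for illustration, not the paper's ATLAS formulation.

```python
# Illustrative sketch only. The abstract does not give ATLAS's functional form,
# so this uses the familiar Chinchilla-style law L(N, D) = E + A/N^alpha + B/D^beta
# as a stand-in, and scores it by out-of-sample R^2, the statistic the abstract
# reports. All runs, constants, and names below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

def parametric_loss(ND, E, A, alpha, B, beta):
    """Loss surface over model parameters N and training tokens D."""
    N, D = ND
    return E + A / N**alpha + B / D**beta

# Hypothetical training runs: (parameters N, tokens D, final eval loss L).
rng = np.random.default_rng(0)
N = 10 ** rng.uniform(7.0, np.log10(8e9), size=60)   # 10M-8B params, the paper's range
D = 10 ** rng.uniform(9.0, 11.3, size=60)             # token budgets (placeholders)
L = 1.7 + 400.0 / N**0.34 + 4e3 / D**0.28 + rng.normal(0.0, 0.01, size=60)

# Fit on most runs; hold out the rest to measure out-of-sample generalization.
train, test = slice(0, 45), slice(45, None)
popt, _ = curve_fit(
    parametric_loss, (N[train], D[train]), L[train],
    p0=(1.5, 300.0, 0.3, 3e3, 0.3),
    bounds=(0.0, np.inf), maxfev=20_000,
)

pred = parametric_loss((N[test], D[test]), *popt)
ss_res = np.sum((L[test] - pred) ** 2)
ss_tot = np.sum((L[test] - L[test].mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot    # held-out R^2, the abstract's comparison metric
print(f"held-out R^2 = {r2:.3f}")
```

Swapping `parametric_loss` for a different candidate law and comparing the resulting held-out R^2 values is, at this level of abstraction, the kind of head-to-head evaluation the abstract describes; the paper's actual fitting data come from its 774 multilingual training runs rather than synthetic draws.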