

INTELLECT-3: Technical Report

December 18, 2025
Authors: Prime Intellect Team, Mika Senghaas, Fares Obeid, Sami Jaghouar, William Brown, Jack Min Ong, Daniel Auras, Matej Sirovatka, Jannik Straube, Andrew Baker, Sebastian Müller, Justus Mattern, Manveer Basra, Aiman Ismail, Dominik Scherm, Cooper Miller, Ameen Patel, Simon Kirsten, Mario Sieg, Christian Reetz, Kemal Erdem, Vincent Weisser, Johannes Hagemann
cs.AI

Abstract

We present INTELLECT-3, a 106B-parameter Mixture-of-Experts model (12B active) trained with large-scale reinforcement learning on our end-to-end RL infrastructure stack. INTELLECT-3 achieves state-of-the-art performance for its size across math, code, science, and reasoning benchmarks, outperforming many larger frontier models. We open-source the model together with the full infrastructure stack used to create it, including the RL framework, the complete training recipe, and a wide collection of training and evaluation environments from our Environments Hub community platform, built with the verifiers library. Built for this effort, we introduce prime-rl, an open framework for large-scale asynchronous reinforcement learning, which scales seamlessly from a single node to thousands of GPUs and is tailored for agentic RL, with first-class support for multi-turn interactions and tool use. Using this stack, we run both SFT and RL training on top of the GLM-4.5-Air-Base model, scaling RL training up to 512 H200 GPUs with high training efficiency.
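To illustrate the asynchronous pattern the abstract refers to, where rollout generation is decoupled from training updates so that workers may act on a slightly stale policy, here is a minimal stdlib-only Python sketch. It is a hypothetical toy (all class and field names are assumptions for illustration), not prime-rl's actual API:

```python
import queue
import threading

class AsyncRLLoop:
    """Toy asynchronous RL loop: rollout workers push episodes tagged with
    the policy version they saw, while a learner thread consumes batches
    and advances the policy. Names here are illustrative, not prime-rl's."""

    def __init__(self, num_workers=4, episodes_per_worker=8, batch_size=4):
        self.rollouts = queue.Queue()   # shared rollout buffer
        self.policy_version = 0         # bumped once per optimizer step
        self.num_workers = num_workers
        self.episodes_per_worker = episodes_per_worker
        self.batch_size = batch_size
        self.staleness = []             # policy lag of each consumed rollout

    def worker(self):
        for _ in range(self.episodes_per_worker):
            # Snapshot the policy version, then "generate" a rollout with it.
            self.rollouts.put({"version": self.policy_version, "reward": 1.0})

    def learner(self):
        total = self.num_workers * self.episodes_per_worker
        batch = []
        for _ in range(total):
            batch.append(self.rollouts.get())
            if len(batch) == self.batch_size:
                # Record how stale each rollout is relative to current weights.
                self.staleness += [self.policy_version - r["version"] for r in batch]
                self.policy_version += 1  # stand-in for one optimizer step
                batch = []

    def run(self):
        workers = [threading.Thread(target=self.worker) for _ in range(self.num_workers)]
        learner = threading.Thread(target=self.learner)
        for t in workers:
            t.start()
        learner.start()
        for t in workers:
            t.join()
        learner.join()
        return self.policy_version  # number of optimizer steps taken
```

The key design point the sketch captures is that workers never block on the learner: they keep filling the queue while updates proceed, so the consumed rollouts can lag the current policy by a bounded, measurable amount (the `staleness` list).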