INTELLECT-3: Technical Report
December 18, 2025
Authors: Prime Intellect Team, Mika Senghaas, Fares Obeid, Sami Jaghouar, William Brown, Jack Min Ong, Daniel Auras, Matej Sirovatka, Jannik Straube, Andrew Baker, Sebastian Müller, Justus Mattern, Manveer Basra, Aiman Ismail, Dominik Scherm, Cooper Miller, Ameen Patel, Simon Kirsten, Mario Sieg, Christian Reetz, Kemal Erdem, Vincent Weisser, Johannes Hagemann
cs.AI
Abstract
We present INTELLECT-3, a 106B-parameter Mixture-of-Experts model (12B active) trained with large-scale reinforcement learning on our end-to-end RL infrastructure stack. INTELLECT-3 achieves state-of-the-art performance for its size across math, code, science, and reasoning benchmarks, outperforming many larger frontier models. We open-source the model together with the full infrastructure stack used to create it, including the RL framework, the complete training recipe, and a wide collection of training and evaluation environments, built with the verifiers library and drawn from our Environments Hub community platform. Built for this effort, we introduce prime-rl, an open framework for large-scale asynchronous reinforcement learning that scales seamlessly from a single node to thousands of GPUs and is tailored for agentic RL, with first-class support for multi-turn interactions and tool use. Using this stack, we run both SFT and RL training on top of the GLM-4.5-Air-Base model, scaling RL training up to 512 H200 GPUs with high training efficiency.
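The "asynchronous" design mentioned above refers to decoupling rollout generation from policy updates: workers keep sampling with a possibly stale policy snapshot while the learner consumes finished rollouts. The toy sketch below illustrates that producer/consumer pattern with threads and a queue; it is an assumption-laden illustration of the general idea, not the prime-rl API (the `Policy`, `rollout_worker`, and `learner` names are hypothetical).

```python
import queue
import threading

# Illustrative asynchronous RL loop (NOT the prime-rl API): rollout workers
# generate trajectories against a possibly stale policy snapshot while the
# learner consumes rollouts and bumps the policy version concurrently.

class Policy:
    """Stand-in for model weights; only tracks a version counter."""
    def __init__(self):
        self.version = 0
        self.lock = threading.Lock()

    def snapshot(self):
        # Version the worker samples with; may lag behind the learner.
        with self.lock:
            return self.version

    def update(self):
        # One optimizer step == one version bump in this toy model.
        with self.lock:
            self.version += 1

def rollout_worker(policy, rollouts, n_rollouts):
    for _ in range(n_rollouts):
        v = policy.snapshot()
        # A real worker would run the environment here; we emit a stub rollout.
        rollouts.put({"policy_version": v, "reward": 1.0})

def learner(policy, rollouts, n_steps, staleness_log):
    for _ in range(n_steps):
        batch = rollouts.get()
        # Off-policy gap: current version minus the version that produced the data.
        staleness_log.append(policy.version - batch["policy_version"])
        policy.update()

def run(n_steps=16):
    policy = Policy()
    rollouts = queue.Queue()
    staleness_log = []
    w = threading.Thread(target=rollout_worker, args=(policy, rollouts, n_steps))
    l = threading.Thread(target=learner, args=(policy, rollouts, n_steps, staleness_log))
    w.start(); l.start()
    w.join(); l.join()
    return policy.version, staleness_log

if __name__ == "__main__":
    version, staleness = run()
    print(version, staleness)
```

Because generation and training overlap, the staleness log is generally non-zero; real systems bound this gap (e.g., by limiting how far rollouts may lag the learner) to keep training stable.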