# Qwen3-Coder-Next Technical Report
February 28, 2026
Authors: Ruisheng Cao, Mouxiang Chen, Jiawei Chen, Zeyu Cui, Yunlong Feng, Binyuan Hui, Yuheng Jing, Kaixin Li, Mingze Li, Junyang Lin, Zeyao Ma, Kashun Shum, Xuwu Wang, Jinxi Wei, Jiaxi Yang, Jiajun Zhang, Lei Zhang, Zongmeng Zhang, Wenting Zhao, Fan Zhou
cs.AI
Abstract
We present Qwen3-Coder-Next, an open-weight language model specialized for coding agents. Qwen3-Coder-Next is an 80-billion-parameter model that activates only 3 billion parameters during inference, combining strong coding capability with efficient inference. In this work, we explore how far strong training recipes can push the capability limits of models with small active-parameter footprints. To this end, we perform agentic training on large-scale synthesized verifiable coding tasks paired with executable environments, allowing the model to learn directly from environment feedback via mid-training and reinforcement learning. Across agent-centric benchmarks including SWE-Bench and Terminal-Bench, Qwen3-Coder-Next achieves performance that is highly competitive for its active parameter count. We release both base and instruction-tuned open-weight versions to support research and real-world coding agent development.
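The 80B-total / 3B-active figure implies a sparsely activated design in the style of a mixture-of-experts: each token routes through only a small subset of the parameters. The sketch below is a toy illustration of top-k expert routing, not Qwen3-Coder-Next's actual architecture; the expert count, top-k value, and dimensions are hypothetical, chosen only to show how the active-parameter fraction scales as k/n.

```python
import numpy as np

# Toy top-k mixture-of-experts routing (illustrative only; all sizes
# below are hypothetical, not Qwen3-Coder-Next's real configuration).
rng = np.random.default_rng(0)

n_experts = 64   # total experts in the layer (hypothetical)
top_k = 2        # experts activated per token (hypothetical)
d_model = 16     # token embedding width (hypothetical)

def route(x, gate_w, k):
    """Pick the k highest-scoring experts for token x.

    Returns the chosen expert indices and their softmax-normalized
    gating weights, which would scale each expert's output.
    """
    logits = x @ gate_w                    # one score per expert
    idx = np.argsort(logits)[-k:]          # indices of the top-k experts
    w = np.exp(logits[idx] - logits[idx].max())  # stable softmax over top-k
    return idx, w / w.sum()

gate_w = rng.normal(size=(d_model, n_experts))  # router weights
x = rng.normal(size=d_model)                    # one token embedding

idx, w = route(x, gate_w, top_k)

# Only top_k of n_experts run per token, so (ignoring shared layers)
# the active-parameter fraction is roughly k / n.
active_fraction = top_k / n_experts
print(f"experts used: {idx}, weights: {w}, active fraction: {active_fraction:.3f}")
```

Under this kind of routing, total capacity grows with the number of experts while per-token compute stays fixed, which is the property the abstract highlights: an 80B-parameter model with the inference cost of a ~3B dense model.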