

Qwen3-Coder-Next Technical Report

February 28, 2026
Authors: Ruisheng Cao, Mouxiang Chen, Jiawei Chen, Zeyu Cui, Yunlong Feng, Binyuan Hui, Yuheng Jing, Kaixin Li, Mingze Li, Junyang Lin, Zeyao Ma, Kashun Shum, Xuwu Wang, Jinxi Wei, Jiaxi Yang, Jiajun Zhang, Lei Zhang, Zongmeng Zhang, Wenting Zhao, Fan Zhou
cs.AI

Abstract

We present Qwen3-Coder-Next, an open-weight language model specialized for coding agents. Qwen3-Coder-Next is an 80-billion-parameter model that activates only 3 billion parameters during inference, enabling strong coding capability with efficient inference. In this work, we explore how far strong training recipes can push the capability limits of models with small parameter footprints. To achieve this, we perform agentic training through large-scale synthesis of verifiable coding tasks paired with executable environments, allowing learning directly from environment feedback via mid-training and reinforcement learning. Across agent-centric benchmarks including SWE-Bench and Terminal-Bench, Qwen3-Coder-Next achieves competitive performance relative to its active parameter count. We release both base and instruction-tuned open-weight versions to support research and real-world coding agent development.
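
To make the abstract's training setup concrete, the sketch below illustrates one way a synthesized, verifiable coding task paired with an executable environment could yield the reward signal used for reinforcement learning. It is a minimal illustration under stated assumptions, not the report's actual pipeline: the CodingTask structure, the pytest-based checker, and the binary reward are hypothetical stand-ins introduced here for clarity.

```python
# Minimal sketch, assuming a pytest-style checker and a binary pass/fail reward;
# none of these names or structures come from the Qwen3-Coder-Next report.
import subprocess
import sys
import tempfile
from dataclasses import dataclass
from pathlib import Path


@dataclass
class CodingTask:
    prompt: str     # natural-language task description given to the agent
    test_code: str  # executable test that verifies a candidate solution


def run_in_environment(solution: str, task: CodingTask, timeout: int = 30) -> float:
    """Execute the candidate solution against the task's tests; return a reward."""
    with tempfile.TemporaryDirectory() as workdir:
        Path(workdir, "solution.py").write_text(solution)
        Path(workdir, "test_solution.py").write_text(task.test_code)
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "-q", "test_solution.py"],
            cwd=workdir,
            capture_output=True,
            timeout=timeout,
        )
    # Binary environment feedback: 1.0 if every test passes, 0.0 otherwise.
    return 1.0 if result.returncode == 0 else 0.0


# Usage: sample a candidate from the policy model, score it in the environment,
# and feed (prompt, solution, reward) into the RL update.
task = CodingTask(
    prompt="Write add(a, b) that returns the sum of two integers.",
    test_code="from solution import add\n\n\ndef test_add():\n    assert add(2, 3) == 5\n",
)
candidate = "def add(a, b):\n    return a + b\n"
print(f"reward = {run_in_environment(candidate, task)}")
```

In an actual agentic training run, rewards of this kind would be collected across many synthesized tasks and environments and used to update the policy model, with harder, multi-step repository tasks (as in SWE-Bench-style settings) replacing the single-function example shown here.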