Step-3 is Large yet Affordable: Model-system Co-design for Cost-effective Decoding
July 25, 2025
Authors: StepFun, Bin Wang, Bojun Wang, Changyi Wan, Guanzhe Huang, Hanpeng Hu, Haonan Jia, Hao Nie, Mingliang Li, Nuo Chen, Siyu Chen, Song Yuan, Wuxun Xie, Xiaoniu Song, Xing Chen, Xingping Yang, Xuelin Zhang, Yanbo Yu, Yaoyu Wang, Yibo Zhu, Yimin Jiang, Yu Zhou, Yuanwei Lu, Houyi Li, Jingcheng Hu, Ka Man Lo, Ailin Huang, Binxing Jiao, Bo Li, Boyu Chen, Changxin Miao, Chang Lou, Chen Hu, Chen Xu, Chenfeng Yu, Chengyuan Yao, Daokuan Lv, Dapeng Shi, Deshan Sun, Ding Huang, Dingyuan Hu, Dongqing Pang, Enle Liu, Fajie Zhang, Fanqi Wan, Gulin Yan, Han Zhang, Han Zhou, Hanghao Wu, Hangyu Guo, Hanqi Chen, Hanshan Zhang, Hao Wu, Haocheng Zhang, Haolong Yan, Haoran Lv, Haoran Wei, Hebin Zhou, Heng Wang, Heng Wang, Hongxin Li, Hongyu Zhou, Hongyuan Wang, Huiyong Guo, Jia Wang, Jiahao Gong, Jialing Xie, Jian Zhou, Jianjian Sun, Jiaoren Wu, Jiaran Zhang, Jiayu Liu, Jie Cheng, Jie Luo, Jie Yan, Jie Yang, Jieyi Hou, Jinguang Zhang, Jinlan Cao, Jisheng Yin, Junfeng Liu, Junhao Huang, Junzhe Lin, Kaijun Tan, Kaixiang Li, Kang An, Kangheng Lin, Kenkun Liu, Lei Yang, Liang Zhao, Liangyu Chen, Lieyu Shi, Liguo Tan, Lin Lin, Lin Zhang, Lina Chen, Liwen Huang, Liying Shi, Longlong Gu, Mei Chen, Mengqiang Ren, Ming Li, Mingzhe Chen, Na Wang, Nan Wu, Qi Han, Qian Zhao, Qiang Zhang, Qianni Liu, Qiaohui Chen, Qiling Wu, Qinglin He, Qinyuan Tan, Qiufeng Wang, Qiuping Wu, Qiuyan Liang, Quan Sun, Rui Li, Ruihang Miao, Ruosi Wan, Ruyan Guo, Shangwu Zhong, Shaoliang Pang, Shengjie Fan, Shijie Shang, Shilei Jiang, Shiliang Yang, Shiming Hao, Shuli Gao, Siming Huang, Siqi Liu, Tiancheng Cao, Tianhao Cheng, Tianhao Peng, Wang You, Wei Ji, Wen Sun, Wenjin Deng, Wenqing He, Wenzhen Zheng, Xi Chen, Xiangwen Kong, Xianzhen Luo, Xiaobo Yang, Xiaojia Liu, Xiaoxiao Ren, Xin Han, Xin Li, Xin Wu, Xu Zhao, Yanan Wei, Yang Li, Yangguang Li, Yangshijie Xu, Yanming Xu, Yaqiang Shi, Yeqing Shen, Yi Yang, Yifei Yang, Yifeng Gong, Yihan Chen, Yijing Yang, Yinmin Zhang, Yizhuang Zhou, Yuanhao Ding, Yuantao Fan, Yuanzhen Yang, Yuchu Luo, Yue Peng, Yufan Lu, Yuhang Deng, Yuhe Yin, Yujie Liu, Yukun Chen, Yuling Zhao, Yun Mou, Yunlong Li, Yunzhou Ju, Yusheng Li, Yuxiang Yang, Yuxiang Zhang, Yuyang Chen, Zejia Weng, Zhe Xie, Zheng Ge, Zheng Gong, Zhenyi Lu, Zhewei Huang, Zhichao Chang, Zhiguo Huang, Zhirui Wang, Zidong Yang, Zili Wang, Ziqi Wang, Zixin Zhang, Binxing Jiao, Daxin Jiang, Heung-Yeung Shum, Xiangyu Zhang
cs.AI
Abstract
Large language models (LLMs) face low hardware efficiency during decoding,
especially for long-context reasoning tasks. This paper introduces Step-3, a 321B-parameter vision-language model (VLM) whose hardware-aware model-system co-design is optimized to minimize decoding costs. Step-3 innovates in two key dimensions: (1) a novel
Multi-Matrix Factorization Attention (MFA) mechanism that significantly reduces
both KV cache size and computation while maintaining high attention
expressiveness, and (2) Attention-FFN Disaggregation (AFD), a distributed
inference system that decouples attention and Feed-Forward Network (FFN) layers
into specialized subsystems. This co-design achieves unprecedented cost
efficiency: Step-3 significantly reduces theoretical decoding costs compared
with models like DeepSeek-V3 and Qwen3 MoE 235B, with the gains widening at longer contexts. Step-3 achieves this low cost while activating 38B parameters per
token (more than DeepSeek-V3 and Qwen3 MoE 235B), demonstrating that
hardware-aligned attention arithmetic intensity, MoE sparsity, and AFD are
critical to cost-effectiveness. We perform a head-to-head comparison with
DeepSeek-V3 in its favorable scenarios. Our implementation on Hopper GPUs
achieves a decoding throughput of up to 4,039 tokens per second per GPU under
a 50ms TPOT SLA (4K context, FP8, no MTP). This is higher than DeepSeek-V3's 2,324 tokens per second in the same setup and sets a new Pareto frontier for LLM decoding.
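The abstract describes MFA only at a high level. As a rough sketch of the kind of KV-cache reduction such a design targets, the PyTorch toy below pairs a low-rank factorized query projection with a single key/value head shared across all query heads, in the spirit of multi-query attention. Everything here, including the class name FactorizedSharedKVAttention and all dimensions, is an illustrative assumption, not Step-3's actual architecture.

```python
import torch
import torch.nn as nn


class FactorizedSharedKVAttention(nn.Module):
    """Toy attention layer with a low-rank query factorization and a
    single K/V head shared across all query heads. Illustrative only:
    the abstract does not specify MFA's exact formulation, and every
    name and dimension here is an assumption."""

    def __init__(self, d_model=1024, n_heads=16, head_dim=64, q_rank=256):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, head_dim
        # Query projection factored into two thin matrices
        # (d_model -> q_rank -> n_heads * head_dim) instead of one
        # large one, cutting parameters and per-token FLOPs.
        self.q_down = nn.Linear(d_model, q_rank, bias=False)
        self.q_up = nn.Linear(q_rank, n_heads * head_dim, bias=False)
        # One shared K head and one shared V head: the KV cache holds
        # 2 * head_dim values per token instead of
        # 2 * n_heads * head_dim, an n_heads-fold reduction.
        self.kv = nn.Linear(d_model, 2 * head_dim, bias=False)
        self.out = nn.Linear(n_heads * head_dim, d_model, bias=False)

    def forward(self, x, kv_cache=None):
        b, t, _ = x.shape
        q = self.q_up(self.q_down(x)).view(b, t, self.n_heads, self.head_dim)
        k, v = self.kv(x).chunk(2, dim=-1)  # each (b, t, head_dim)
        if kv_cache is not None:  # decoding: append to the running cache
            k = torch.cat([kv_cache[0], k], dim=1)
            v = torch.cat([kv_cache[1], v], dim=1)
        # All query heads attend over the same shared K/V
        # (causal masking omitted for brevity).
        scores = torch.einsum("bthd,bsd->bhts", q, k) / self.head_dim**0.5
        y = torch.einsum("bhts,bsd->bthd", scores.softmax(-1), v)
        return self.out(y.reshape(b, t, -1)), (k, v)
```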
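Similarly, a back-of-the-envelope cost model hints at why disaggregating attention from FFN layers can pay off: during decoding, attention must read the entire KV cache, so its per-token cost grows with context length, whereas per-token FFN cost is fixed; the two stages therefore saturate hardware differently and benefit from being provisioned independently. The sketch below reuses the abstract's 38B activated parameters but assumes a made-up 512 bytes of KV cache per token; both functions are hypothetical illustrations, not the paper's cost model.

```python
# Toy cost model contrasting the two stages that AFD separates.
# The KV-cache byte count is a placeholder assumption.

def attention_bytes_per_token(context_len, kv_bytes_per_token):
    # Decoding one token reads the entire KV cache, so attention is
    # memory-bound and its per-token cost grows linearly with context.
    return context_len * kv_bytes_per_token

def ffn_flops_per_token(activated_params):
    # Roughly 2 FLOPs per activated weight: compute-bound and
    # independent of context length.
    return 2 * activated_params

for ctx in (4_096, 32_768, 131_072):
    attn_mb = attention_bytes_per_token(ctx, kv_bytes_per_token=512) / 1e6
    ffn_tflops = ffn_flops_per_token(activated_params=38e9) / 1e12
    print(f"ctx={ctx:>7}: attention reads {attn_mb:,.0f} MB/token; "
          f"FFN stays at {ffn_tflops:.2f} TFLOPs/token")
```

Under this toy model the attention stage becomes increasingly bandwidth-bound as context grows, which is consistent with the abstract's claim that Step-3's cost advantage widens at longer contexts.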