Multiverse: Your Language Models Secretly Decide How to Parallelize and Merge Generation
June 11, 2025
Authors: Xinyu Yang, Yuwei An, Hongyi Liu, Tianqi Chen, Beidi Chen
cs.AI
Abstract
Autoregressive Large Language Models (AR-LLMs) frequently exhibit implicit
parallelism in sequential generation. Inspired by this, we introduce
Multiverse, a new generative model that enables natively parallel generation.
Multiverse internalizes a MapReduce paradigm, generating automatically through
three stages: (i) a Map stage for adaptive task decomposition, (ii) a Process
stage for parallel subtask execution, and (iii) a Reduce stage for lossless
result synthesis. Next, we build a real-world Multiverse reasoning model with
co-design of data, algorithm, and system, enabling rapid and seamless transfer
from frontier AR-LLMs. Starting from sequential reasoning chains, we create
Multiverse 1K by converting them into structured training data using an
automated LLM-assisted pipeline, avoiding costly human annotations.
Algorithmically, we design Multiverse Attention to separate parallel reasoning
steps while keeping compatibility with causal attention for efficient training.
At the system level, we implement Multiverse Engine to enable parallel inference. It
features a dedicated scheduler that dynamically switches between sequential and
parallel generation, triggered directly by the model. After 3 hours of
fine-tuning on 1K examples, our Multiverse-32B stands as the only
open-sourced non-AR model achieving performance on par with leading AR-LLMs of
the same scale, evidenced by AIME24 & 25 scores of 54% and 46%, respectively.
Moreover, our budget control experiments show that Multiverse-32B exhibits
superior scaling, outperforming AR-LLMs by 1.87% on average using the same
context length. Such scaling further leads to practical efficiency gains,
achieving up to 2x speedup across varying batch sizes. We have open-sourced the
entire Multiverse ecosystem, including data, model weights, engine, supporting
tools, as well as complete data curation prompts and detailed training and
evaluation recipes.
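
To make the Map/Process/Reduce control flow described above concrete, here is a minimal sketch assuming a generic text-generation callable (`generate_step`). The function names, prompt strings, and thread-based scheduling are illustrative assumptions only; they are not the released Multiverse Engine interface or the Multiverse Attention mechanism.

```python
# Hypothetical sketch of one Map -> Process -> Reduce generation round.
# `generate_step`, the prompts, and the thread pool are illustrative stand-ins,
# not the actual Multiverse-32B / Multiverse Engine implementation.
from concurrent.futures import ThreadPoolExecutor
from typing import Callable, List


def multiverse_generate(prompt: str, generate_step: Callable[[str], str]) -> str:
    """Run one Map/Process/Reduce round with a stand-in LLM call."""
    # Map stage: the model adaptively decomposes the task into independent
    # subtasks, here assumed to be emitted one per line.
    plan = generate_step(f"Decompose into independent subtasks:\n{prompt}")
    subtasks: List[str] = [line for line in plan.splitlines() if line.strip()]

    # Process stage: execute subtasks in parallel; each branch only sees its
    # own subtask, mirroring the separation of parallel reasoning steps.
    with ThreadPoolExecutor(max_workers=len(subtasks) or 1) as pool:
        partial_results = list(
            pool.map(lambda t: generate_step(f"Solve this subtask:\n{t}"), subtasks)
        )

    # Reduce stage: merge all branch outputs into a single final answer.
    merged = "\n".join(partial_results)
    return generate_step(f"Combine the partial results into a final answer:\n{merged}")


if __name__ == "__main__":
    # Toy stand-in for a model call so the sketch runs end to end.
    echo = lambda x: f"[model output for: {x[:40]}...]"
    print(multiverse_generate("Compute 3 independent sums and combine them.", echo))
```

In this sketch the switch between sequential and parallel execution is hard-coded around the thread pool; in the paper's design, that switch is instead triggered by the model itself and handled by the Multiverse Engine scheduler.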