OneFlow: Concurrent Mixed-Modal and Interleaved Generation with Edit Flows

October 3, 2025
作者: John Nguyen, Marton Havasi, Tariq Berrada, Luke Zettlemoyer, Ricky T. Q. Chen
cs.AI

Abstract

We present OneFlow, the first non-autoregressive multimodal model that enables variable-length and concurrent mixed-modal generation. Unlike autoregressive models that enforce rigid causal ordering between text and image generation, OneFlow combines an insertion-based Edit Flow for discrete text tokens with Flow Matching for image latents. OneFlow enables concurrent text-image synthesis with hierarchical sampling that prioritizes content over grammar. Through controlled experiments across model sizes from 1B to 8B, we demonstrate that OneFlow outperforms autoregressive baselines on both generation and understanding tasks while using up to 50% fewer training FLOPs. OneFlow surpasses both autoregressive and diffusion-based approaches while unlocking new capabilities for concurrent generation, iterative refinement, and natural reasoning-like generation.
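The abstract pairs two generative objectives: Flow Matching for continuous image latents and an insertion-based Edit Flow for discrete text. As a point of reference for the first of these, below is a minimal, illustrative sketch of a standard conditional Flow Matching training loss (linear noise-to-data path with a velocity-regression target). This is a generic textbook formulation, not the authors' implementation; the function name and the assumption that `model` predicts a velocity field are ours.

```python
import torch

def flow_matching_loss(model, x1):
    """Generic conditional Flow Matching loss sketch (not OneFlow's code).

    x1: a batch of target latents, shape (B, ...).
    model(x_t, t): assumed to predict the velocity field at time t.
    """
    x0 = torch.randn_like(x1)                             # noise endpoint
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)))  # per-sample time in [0, 1]
    xt = (1 - t) * x0 + t * x1                            # linear interpolation path
    target_v = x1 - x0                                    # constant target velocity along the path
    pred_v = model(xt, t.flatten())                       # model's velocity prediction
    return ((pred_v - target_v) ** 2).mean()              # regression (MSE) objective
```

In OneFlow this continuous objective applies to the image latents, while the text tokens are trained under the insertion-based Edit Flow objective; the paper's contribution is combining the two so that text and image can be generated concurrently rather than in a fixed causal order.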
PDF | October 8, 2025