

OneFlow: Concurrent Mixed-Modal and Interleaved Generation with Edit Flows

October 3, 2025
Authors: John Nguyen, Marton Havasi, Tariq Berrada, Luke Zettlemoyer, Ricky T. Q. Chen
cs.AI

Abstract

We present OneFlow, the first non-autoregressive multimodal model that enables variable-length and concurrent mixed-modal generation. Unlike autoregressive models that enforce rigid causal ordering between text and image generation, OneFlow combines an insertion-based Edit Flow for discrete text tokens with Flow Matching for image latents. OneFlow enables concurrent text-image synthesis with hierarchical sampling that prioritizes content over grammar. Through controlled experiments across model sizes from 1B to 8B, we demonstrate that OneFlow outperforms autoregressive baselines on both generation and understanding tasks while using up to 50% fewer training FLOPs. OneFlow surpasses both autoregressive and diffusion-based approaches while unlocking new capabilities for concurrent generation, iterative refinement, and natural reasoning-like generation.
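To make the Flow Matching side of the method concrete, below is a minimal, self-contained sketch of a single Flow Matching training step on image latents. It assumes a linear (rectified-flow style) probability path from noise to data, which is a common choice but not confirmed by the abstract; the names `VelocityNet` and `flow_matching_loss`, the toy MLP architecture, and all dimensions are illustrative placeholders, not the OneFlow implementation.

```python
# Hypothetical sketch: conditional Flow Matching on continuous latents.
# A real model would be a transformer or U-Net over image latents; this
# toy MLP only illustrates the training objective.
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Toy velocity field v_theta(x_t, t)."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim)
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Concatenate the time scalar onto each latent vector.
        return self.net(torch.cat([x_t, t], dim=-1))

def flow_matching_loss(model: nn.Module, x1: torch.Tensor) -> torch.Tensor:
    """Regress the velocity along a straight path from noise x0 to data x1."""
    x0 = torch.randn_like(x1)          # noise endpoint of the path
    t = torch.rand(x1.shape[0], 1)     # uniform time in [0, 1]
    x_t = (1 - t) * x0 + t * x1        # linear interpolation path
    target_v = x1 - x0                 # d/dt of x_t along this path
    pred_v = model(x_t, t)
    return ((pred_v - target_v) ** 2).mean()

# Usage: one optimization step on random stand-in "latents".
model = VelocityNet(dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = flow_matching_loss(model, torch.randn(8, 16))
loss.backward()
opt.step()
print(f"flow matching loss: {loss.item():.4f}")
```

In OneFlow, per the abstract, an objective of this kind for image latents is trained jointly with an insertion-based Edit Flow over discrete text tokens, so that text and image content can be generated concurrently rather than in a fixed causal order.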