

Cubic Discrete Diffusion: Discrete Visual Generation on High-Dimensional Representation Tokens

March 19, 2026
作者: Yuqing Wang, Chuofan Ma, Zhijie Lin, Yao Teng, Lijun Yu, Shuai Wang, Jiaming Han, Jiashi Feng, Yi Jiang, Xihui Liu
cs.AI

Abstract

Visual generation with discrete tokens has gained significant attention as it enables a unified token prediction paradigm shared with language models, promising seamless multimodal architectures. However, current discrete generation methods remain limited to low-dimensional latent tokens (typically 8-32 dims), sacrificing the semantic richness essential for understanding. While high-dimensional pretrained representations (768-1024 dims) could bridge this gap, their discrete generation poses fundamental challenges. In this paper, we present Cubic Discrete Diffusion (CubiD), the first discrete generation model for high-dimensional representations. CubiD performs fine-grained masking throughout the high-dimensional discrete representation -- any dimension at any position can be masked and predicted from partial observations. This enables the model to learn rich correlations both within and across spatial positions, with the number of generation steps fixed at T regardless of feature dimensionality, where T ≪ hwd. On ImageNet-256, CubiD achieves state-of-the-art discrete generation with strong scaling behavior from 900M to 3.7B parameters. Crucially, we validate that these discretized tokens preserve original representation capabilities, demonstrating that the same discrete tokens can effectively serve both understanding and generation tasks. We hope this work will inspire future research toward unified multimodal architectures. Code is available at: https://github.com/YuqingWang1029/CubiD.
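The fine-grained masking idea above can be sketched in a toy form: treat the image as an h × w grid of positions with d discrete channels each, mask individual (position, channel) entries rather than whole tokens, and reveal them over a fixed number of steps T that does not depend on d. The sizes, the cosine-style schedule, and all names below are illustrative assumptions for exposition, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: 4x4 spatial grid, d=8 discrete channels per position,
# a vocabulary of 16 codes per channel, and T=5 generation steps.
h, w, d, vocab, T = 4, 4, 8, 16, 5
MASK = vocab  # reserve one extra index as the [MASK] token

tokens = rng.integers(0, vocab, size=(h, w, d))  # ground-truth discrete codes

def mask_schedule(step, total):
    """Fraction of entries still masked after a given step (cosine-style)."""
    return np.cos(0.5 * np.pi * step / total)

# Fine-grained masking: any (row, col, channel) entry can be masked
# independently, so the model must capture correlations both within a
# spatial position (across channels) and across positions.
x = np.full((h, w, d), MASK)
order = rng.permutation(h * w * d)  # one random reveal order over all entries

for step in range(1, T + 1):
    keep_masked = int(mask_schedule(step, T) * h * w * d)
    revealed = order[: h * w * d - keep_masked]
    # In the real model, revealed entries would be *predicted* from the
    # partial observation x; here we copy ground truth to show only the
    # schedule mechanics (T steps total, independent of d).
    x.flat[revealed] = tokens.flat[revealed]

assert (x == tokens).all()  # after T steps every entry is revealed
print(f"decoded {h * w * d} entries in {T} steps (T independent of d)")
```

The point of the sketch is the step count: all h·w·d entries are filled in T passes of the schedule, so increasing the channel dimension d changes only how many entries each pass reveals, not the number of passes.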