
UniWeTok: A Unified Binary Tokenizer with a Codebook Size of 2^{128} for Unified Multimodal Large Language Models

February 15, 2026
Authors: Shaobin Zhuang, Yuang Ai, Jiaming Han, Weijia Mao, Xiaohui Li, Fangyikang Wang, Xiao Wang, Yan Li, Shanchuan Lin, Kun Xu, Zhenheng Yang, Huaibo Huang, Xiangyu Yue, Hao Chen, Yali Wang
cs.AI

Abstract

Unified Multimodal Large Language Models (MLLMs) require a visual representation that simultaneously supports high-fidelity reconstruction, complex semantic extraction, and generative suitability. However, existing visual tokenizers typically struggle to satisfy these conflicting objectives within a single framework. In this paper, we introduce UniWeTok, a unified discrete tokenizer designed to bridge this gap using a massive binary codebook (2^{128}). For the training framework, we introduce Pre-Post Distillation and a Generative-Aware Prior to enhance the semantic extraction ability and generative prior of the discrete tokens. In terms of model architecture, we propose a convolution-attention hybrid architecture with the SigLu activation function. SigLu activation not only bounds the encoder output and stabilizes the semantic distillation process but also effectively resolves the optimization conflict between the token entropy loss and the commitment loss. We further propose a three-stage training framework designed to enhance UniWeTok's adaptability across various image resolutions and perception-sensitive scenarios, such as those involving human faces and textual content. On ImageNet, UniWeTok achieves state-of-the-art image generation performance (FID: UniWeTok 1.38 vs. REPA 1.42) while requiring remarkably low training compute (training tokens: UniWeTok 33B vs. REPA 262B). On general-domain benchmarks, UniWeTok demonstrates highly competitive capabilities across a broad range of tasks, including multimodal understanding, image generation (DPG score: UniWeTok 86.63 vs. FLUX.1 [Dev] 83.84), and image editing (GEdit overall score: UniWeTok 5.09 vs. OmniGen 5.06). We release code and models to facilitate community exploration of unified tokenizers and MLLMs.
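The 2^{128} codebook size is feasible because a binary tokenizer represents each token as a vector of 128 independent binary bits rather than an index into a stored embedding table, so the codebook is implicit. The sketch below illustrates this idea with a simple sign-based binarizer; the function name and the exact binarization rule are illustrative assumptions, not the paper's actual quantizer.

```python
import numpy as np

def binary_quantize(z: np.ndarray):
    """Lookup-free binary quantization sketch (illustrative, not UniWeTok's exact scheme).

    Each of the 128 latent dimensions is binarized independently, so the
    implicit codebook has 2**128 entries with no embedding table to store.
    """
    bits = (z > 0).astype(np.int8)          # hard per-dimension binarization
    codes = np.where(bits == 1, 1.0, -1.0)  # map bits to a {-1, +1} code vector
    return bits, codes

# One hypothetical 128-dim latent vector from an encoder.
rng = np.random.default_rng(0)
z = rng.standard_normal(128)
bits, codes = binary_quantize(z)
```

In a full tokenizer, a commitment-style loss would pull the continuous latent `z` toward its binarized code, while an entropy term encourages usage of many of the 2^{128} implicit codes; the abstract notes that balancing these two objectives is one role of the SigLu activation.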