Latent Denoising Makes Good Visual Tokenizers
July 21, 2025
Authors: Jiawei Yang, Tianhong Li, Lijie Fan, Yonglong Tian, Yue Wang
cs.AI
Abstract
Despite their fundamental role, it remains unclear what properties could make
visual tokenizers more effective for generative modeling. We observe that
modern generative models share a conceptually similar training objective --
reconstructing clean signals from corrupted inputs such as Gaussian noise or
masking -- a process we term denoising. Motivated by this insight, we propose
aligning tokenizer embeddings directly with the downstream denoising objective,
encouraging latent embeddings to be more easily reconstructed even when heavily
corrupted. To achieve this, we introduce the Latent Denoising Tokenizer
(l-DeTok), a simple yet effective tokenizer trained to reconstruct clean images
from latent embeddings corrupted by interpolative noise and random masking.
Extensive experiments on ImageNet 256x256 demonstrate that our tokenizer
consistently outperforms standard tokenizers across six representative
generative models. Our findings highlight denoising as a fundamental design
principle for tokenizer development, and we hope it motivates new perspectives
for future tokenizer design.
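To make the training objective concrete, below is a minimal PyTorch-style sketch of the latent denoising idea: latent tokens are corrupted with interpolative noise and random masking, and the decoder is trained to reconstruct the clean image from the corrupted latents. The module names (`encoder`, `decoder`, `mask_token`) and the hyperparameters (`gamma`, `mask_ratio`) are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn.functional as F

def latent_denoising_loss(encoder, decoder, mask_token, images,
                          gamma=0.7, mask_ratio=0.7):
    """Sketch of one l-DeTok-style training step (assumed interfaces).

    encoder:    maps images -> latent tokens of shape (B, N, D)
    decoder:    maps latent tokens -> reconstructed images
    mask_token: learnable tensor of shape (D,) that replaces masked tokens
    """
    z = encoder(images)                      # clean latent tokens, (B, N, D)
    B, N, D = z.shape

    # Interpolative noise: blend each sample's tokens with Gaussian noise
    # using a randomly sampled strength tau in [0, gamma].
    tau = torch.rand(B, 1, 1, device=z.device) * gamma
    noise = torch.randn_like(z)
    z_corrupted = (1.0 - tau) * z + tau * noise

    # Random masking: replace a random subset of tokens with the mask token.
    keep = torch.rand(B, N, device=z.device) > mask_ratio   # True = keep token
    z_corrupted = torch.where(keep.unsqueeze(-1), z_corrupted,
                              mask_token.view(1, 1, D).expand(B, N, D))

    # Reconstruct the clean image from the heavily corrupted latents.
    recon = decoder(z_corrupted)
    return F.mse_loss(recon, images)
```

In practice this reconstruction term would sit alongside the usual tokenizer losses (e.g. perceptual and adversarial terms) used in standard tokenizer training; the sketch only isolates the denoising objective highlighted in the abstract.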