

Learning on the Manifold: Unlocking Standard Diffusion Transformers with Representation Encoders

February 10, 2026
作者: Amandeep Kumar, Vishal M. Patel
cs.AI

Abstract

Leveraging representation encoders for generative modeling offers a path to efficient, high-fidelity synthesis. However, standard diffusion transformers fail to converge on these representations directly. While recent work attributes this to a capacity bottleneck and proposes computationally expensive width scaling of diffusion transformers, we demonstrate that the failure is fundamentally geometric. We identify Geometric Interference as the root cause: standard Euclidean flow matching forces probability paths through the low-density interior of the hyperspherical feature space of representation encoders, rather than following the manifold surface. To resolve this, we propose Riemannian Flow Matching with Jacobi Regularization (RJF). By constraining the generative process to the manifold geodesics and correcting for curvature-induced error propagation, RJF enables standard Diffusion Transformer architectures to converge without width scaling. With RJF, the standard DiT-B architecture (131M parameters) converges effectively, achieving an FID of 3.37 in a setting where prior methods fail to converge. Code: https://github.com/amandpkr/RJF
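The geometric point can be illustrated with a small numerical sketch. The snippet below is not the authors' RJF implementation; it only contrasts the standard Euclidean (linear) flow-matching interpolant, which dips into the low-density interior of the unit hypersphere, with a geodesic (great-circle) interpolant that stays on the manifold surface. The feature dimension and endpoint vectors are hypothetical placeholders.

```python
# Minimal sketch (not the paper's RJF method): compare a Euclidean flow-matching
# path with a geodesic (slerp) path between two unit-norm encoder features.
import numpy as np

def euclidean_path(x0, x1, t):
    """Linear interpolant used in standard Euclidean flow matching."""
    return (1.0 - t) * x0 + t * x1

def geodesic_path(x0, x1, t):
    """Great-circle (slerp) interpolant on the unit hypersphere."""
    omega = np.arccos(np.clip(np.dot(x0, x1), -1.0, 1.0))  # angle between endpoints
    if omega < 1e-8:                                        # nearly coincident points
        return euclidean_path(x0, x1, t)
    return (np.sin((1.0 - t) * omega) * x0 + np.sin(t * omega) * x1) / np.sin(omega)

rng = np.random.default_rng(0)
d = 768                                                    # hypothetical feature dimension
x0 = rng.standard_normal(d); x0 /= np.linalg.norm(x0)      # unit-norm "source" feature
x1 = rng.standard_normal(d); x1 /= np.linalg.norm(x1)      # unit-norm "target" feature

for t in (0.25, 0.5, 0.75):
    lin = euclidean_path(x0, x1, t)
    geo = geodesic_path(x0, x1, t)
    # The linear point has norm < 1, i.e. it falls into the sphere's low-density
    # interior; the geodesic point keeps unit norm and stays on the manifold.
    print(f"t={t:.2f}  |linear|={np.linalg.norm(lin):.3f}  |geodesic|={np.linalg.norm(geo):.3f}")
```

For nearly orthogonal high-dimensional unit vectors, the linear midpoint has norm around 0.71, which is the "interior" region the abstract refers to; the geodesic midpoint remains on the unit sphere.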