Calibri: Enhancing Diffusion Transformers via Parameter-Efficient Calibration
March 25, 2026
Authors: Danil Tokhchukov, Aysel Mirzoeva, Andrey Kuznetsov, Konstantin Sobolev
cs.AI
Abstract
In this paper, we uncover the hidden potential of Diffusion Transformers (DiTs) for enhancing generative tasks. Through an in-depth analysis of the denoising process, we demonstrate that introducing a single learned scaling parameter can significantly improve the performance of DiT blocks. Building on this insight, we propose Calibri, a parameter-efficient approach that optimally calibrates DiT components to elevate generative quality. Calibri frames DiT calibration as a black-box reward optimization problem, solved efficiently with an evolutionary algorithm that adjusts only about 100 parameters. Experimental results reveal that, despite its lightweight design, Calibri consistently improves performance across various text-to-image models. Notably, Calibri also reduces the number of inference steps required for image generation, all while maintaining high-quality outputs.
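The abstract's core recipe, a handful of per-block scaling parameters tuned against a black-box reward with an evolutionary algorithm, can be sketched in a few lines. The sketch below is illustrative, not the authors' implementation: the reward function here is a toy surrogate (in practice it would be a generation-quality score such as an aesthetic or preference model), and all names and hyperparameters are assumptions.

```python
import numpy as np

N_PARAMS = 100  # roughly one learned scale per DiT block, as in the paper

def reward(scales: np.ndarray) -> float:
    # Toy stand-in for the black-box reward: peaks when every scale
    # equals 1.1, mimicking slightly amplified block outputs being optimal.
    return -float(np.sum((scales - 1.1) ** 2))

def evolve(n_iters: int = 200, pop: int = 16, sigma: float = 0.05,
           seed: int = 0) -> np.ndarray:
    """Simple (mu, lambda)-style evolution strategy over the scale vector."""
    rng = np.random.default_rng(seed)
    mean = np.ones(N_PARAMS)  # start from the uncalibrated model (scale = 1)
    for _ in range(n_iters):
        # Sample a population of candidate calibrations around the mean.
        cands = mean + sigma * rng.normal(size=(pop, N_PARAMS))
        scores = np.array([reward(c) for c in cands])
        # Move the mean toward the best-scoring half of the population.
        elite = cands[np.argsort(scores)[-pop // 2:]]
        mean = elite.mean(axis=0)
    return mean

best = evolve()
print(round(float(best.mean()), 2))  # should land close to the optimum, 1.1
```

Because only ~100 scalars are searched and the reward is queried as a black box, no gradients through the diffusion model are needed, which is what makes the evolutionary formulation practical here.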