

EchoDistill: Bidirectional Concept Distillation for One-Step Diffusion Personalization

October 23, 2025
Authors: Yixiong Yang, Tao Wu, Senmao Li, Shiqi Yang, Yaxing Wang, Joost van de Weijer, Kai Wang
cs.AI

Abstract

Recent advances in accelerating text-to-image (T2I) diffusion models have enabled the synthesis of high-fidelity images even in a single step. However, personalizing these models to incorporate novel concepts remains a challenge, because one-step models have limited capacity to capture new concept distributions effectively. We propose a bidirectional concept distillation framework, EchoDistill, to enable one-step diffusion personalization (1-SDP). Our approach involves an end-to-end training process in which a multi-step diffusion model (teacher) and a one-step diffusion model (student) are trained simultaneously. The concept is first distilled from the teacher model to the student, and then echoed back from the student to the teacher. During EchoDistill, we share the text encoder between the two models to ensure consistent semantic understanding. The student model is then optimized with adversarial losses to align with the real image distribution, and with alignment losses to maintain consistency with the teacher's output. Furthermore, we introduce a bidirectional echoing refinement strategy, wherein the student model leverages its faster generation capability to provide feedback to the teacher model. This bidirectional concept distillation mechanism not only enhances the student's ability to personalize novel concepts but also improves the generative quality of the teacher model. Our experiments demonstrate that this collaborative framework significantly outperforms existing personalization methods under the 1-SDP setup, establishing a novel paradigm for rapid and effective personalization in T2I diffusion models.
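The bidirectional mechanism described above can be illustrated with a deliberately minimal toy sketch. All names here are illustrative (not the authors' code), and linear maps stand in for the actual diffusion models: a shared "text embedding" feeds both a teacher and a student; the student is pulled toward the teacher's output (distillation/alignment step), while the teacher is gently nudged toward the student's output (echo refinement step). The adversarial loss on real images is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8
W_teacher = rng.normal(size=(dim, dim))  # stand-in for the multi-step teacher
W_student = rng.normal(size=(dim, dim))  # stand-in for the one-step student
text_emb = rng.normal(size=dim)          # shared text-encoder output

lr = 0.02
echo_weight = 0.1  # the echo step is weaker than the distillation step
for step in range(200):
    out_t = W_teacher @ text_emb
    out_s = W_student @ text_emb
    # Distillation: MSE-gradient step pulling the student toward the teacher.
    W_student -= lr * np.outer(out_s - out_t, text_emb)
    # Echo refinement: the teacher is nudged toward the student's output.
    W_teacher -= lr * echo_weight * np.outer(out_t - out_s, text_emb)

gap = float(np.linalg.norm(W_teacher @ text_emb - W_student @ text_emb))
print(gap)
```

Because both updates shrink the same output difference, the teacher-student gap contracts geometrically; the asymmetric `echo_weight` reflects that the student adapts quickly while the teacher is only refined, mirroring the paper's teacher/student roles.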