FastFit: Accelerating Multi-Reference Virtual Try-On via Cacheable Diffusion Models
August 28, 2025
Authors: Zheng Chong, Yanwei Lei, Shiyue Zhang, Zhuandi He, Zhen Wang, Xujie Zhang, Xiao Dong, Yiling Wu, Dongmei Jiang, Xiaodan Liang
cs.AI
Abstract
Despite its great potential, virtual try-on technology is hindered from
real-world application by two major challenges: the inability of current
methods to support multi-reference outfit compositions (including garments and
accessories), and their significant inefficiency caused by the redundant
re-computation of reference features in each denoising step. To address these
challenges, we propose FastFit, a high-speed multi-reference virtual try-on
framework based on a novel cacheable diffusion architecture. By employing a
Semi-Attention mechanism and substituting traditional timestep embeddings with
class embeddings for reference items, our model fully decouples reference
feature encoding from the denoising process with negligible parameter overhead.
This allows reference features to be computed only once and losslessly reused
across all steps, fundamentally breaking the efficiency bottleneck and
achieving an average 3.5x speedup over comparable methods. Furthermore, to
facilitate research on complex, multi-reference virtual try-on, we introduce
DressCode-MR, a new large-scale dataset. It comprises 28,179 sets of
high-quality, paired images covering five key categories (tops, bottoms,
dresses, shoes, and bags), constructed through a pipeline of expert models and
human feedback refinement. Extensive experiments on the VITON-HD, DressCode,
and our DressCode-MR datasets show that FastFit surpasses state-of-the-art
methods on key fidelity metrics while offering a significant advantage in
inference efficiency.
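The core efficiency idea in the abstract — encode reference items once, conditioned on a class embedding rather than the timestep, then reuse the cached features at every denoising step via a semi-attention over latent and reference tokens — can be illustrated with a toy numpy sketch. All names here (`encode_reference`, `semi_attention`, the shapes and the loop) are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def encode_reference(ref_tokens, class_emb):
    # Hypothetical encoder: reference features depend only on the item's
    # class embedding, never on the timestep, so the result is cacheable.
    return ref_tokens + class_emb

def semi_attention(latent, ref_cache):
    # Toy semi-attention: latent queries attend over latent tokens plus
    # cached reference tokens; the reference branch is never re-encoded.
    kv = np.concatenate([latent, ref_cache], axis=0)
    scores = latent @ kv.T / np.sqrt(latent.shape[-1])
    return softmax(scores) @ kv

rng = np.random.default_rng(0)
d = 8
latent = rng.standard_normal((4, d))                     # noisy try-on latents
refs = [rng.standard_normal((3, d)) for _ in range(2)]   # e.g. top + shoes
class_embs = [rng.standard_normal(d) for _ in range(2)]  # one per category

# Reference features are computed ONCE, outside the denoising loop.
ref_cache = np.concatenate(
    [encode_reference(r, c) for r, c in zip(refs, class_embs)], axis=0)

for step in range(4):  # the cache is reused losslessly at every step
    latent = semi_attention(latent, ref_cache)

print(latent.shape)  # (4, 8)
```

Because the reference encoder takes no timestep input, its output is identical at every step, which is what makes the single up-front computation lossless rather than an approximation.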