Temporal In-Context Fine-Tuning for Versatile Control of Video Diffusion Models

June 1, 2025
Authors: Kinam Kim, Junha Hyung, Jaegul Choo
cs.AI

Abstract

Recent advances in text-to-video diffusion models have enabled high-quality video synthesis, but controllable generation remains challenging, particularly under limited data and compute. Existing fine-tuning methods for conditional generation often rely on external encoders or architectural modifications, which demand large datasets and are typically restricted to spatially aligned conditioning, limiting flexibility and scalability. In this work, we introduce Temporal In-Context Fine-Tuning (TIC-FT), an efficient and versatile approach for adapting pretrained video diffusion models to diverse conditional generation tasks. Our key idea is to concatenate condition and target frames along the temporal axis and insert intermediate buffer frames with progressively increasing noise levels. These buffer frames enable smooth transitions, aligning the fine-tuning process with the pretrained model's temporal dynamics. TIC-FT requires no architectural changes and achieves strong performance with as few as 10-30 training samples. We validate our method across a range of tasks, including image-to-video and video-to-video generation, using large-scale base models such as CogVideoX-5B and Wan-14B. Extensive experiments show that TIC-FT outperforms existing baselines in both condition fidelity and visual quality, while remaining highly efficient in both training and inference. For additional results, visit https://kinam0252.github.io/TIC-FT/.
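
Since the abstract describes the core construction concretely (temporal concatenation of condition and target frames, with buffer frames whose noise level increases toward the target), a small sketch may help make it tangible. The snippet below is a hypothetical PyTorch illustration, assuming per-frame latents and a linear noise ramp for the buffer frames; the function name build_tic_ft_sequence, the interpolation formula, and the tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import torch

def build_tic_ft_sequence(cond_latents, target_latents, num_buffer=4,
                          noise_min=0.2, noise_max=0.8):
    """Assemble a [condition | buffer | target] latent sequence along the
    temporal axis, with buffer-frame noise increasing toward the target.

    cond_latents, target_latents: (T, C, H, W) per-frame latent tensors.
    Returns the concatenated sequence and a per-frame noise schedule.
    """
    # Buffer frames: copies of the last condition frame, each perturbed
    # with progressively more Gaussian noise. The linear ramp and the
    # interpolation formula here are assumptions, not the paper's exact
    # schedule.
    levels = torch.linspace(noise_min, noise_max, num_buffer)
    base = cond_latents[-1]
    buffers = torch.stack(
        [(1 - s) * base + s * torch.randn_like(base) for s in levels]
    )

    # Temporal concatenation of condition, buffer, and target frames.
    seq = torch.cat([cond_latents, buffers, target_latents], dim=0)

    # Per-frame noise levels for fine-tuning: condition frames stay
    # clean (0.0), buffers ramp up, target frames are fully noised (1.0).
    schedule = torch.cat([
        torch.zeros(cond_latents.shape[0]),
        levels,
        torch.ones(target_latents.shape[0]),
    ])
    return seq, schedule

# Example: 8 condition frames conditioning 16 target frames in a toy
# 4-channel, 32x32 latent space.
cond = torch.randn(8, 4, 32, 32)
target = torch.randn(16, 4, 32, 32)
seq, sched = build_tic_ft_sequence(cond, target)
print(seq.shape, sched.shape)  # torch.Size([28, 4, 32, 32]) torch.Size([28])
```

Because the sequence is just a longer video along the temporal axis, this construction requires no architectural changes to the base model, which is consistent with the abstract's claim that only fine-tuning on such sequences is needed.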