Concat-ID: Towards Universal Identity-Preserving Video Synthesis

March 18, 2025
Authors: Yong Zhong, Zhuoyi Yang, Jiayan Teng, Xiaotao Gu, Chongxuan Li
cs.AI

Abstract

We present Concat-ID, a unified framework for identity-preserving video generation. Concat-ID employs Variational Autoencoders to extract image features, which are concatenated with video latents along the sequence dimension, leveraging solely 3D self-attention mechanisms without the need for additional modules. A novel cross-video pairing strategy and a multi-stage training regimen are introduced to balance identity consistency and facial editability while enhancing video naturalness. Extensive experiments demonstrate Concat-ID's superiority over existing methods in both single and multi-identity generation, as well as its seamless scalability to multi-subject scenarios, including virtual try-on and background-controllable generation. Concat-ID establishes a new benchmark for identity-preserving video synthesis, providing a versatile and scalable solution for a wide range of applications.
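Below is a minimal sketch of the core idea described in the abstract: VAE-encoded reference-image latents are concatenated with video latents along the sequence dimension and processed by plain self-attention over the joint token sequence, with no extra identity-injection modules. This is not the authors' implementation; the module names, tensor shapes, and hyperparameters (`ConcatIDBlock`, `concat_identity_tokens`, token counts) are illustrative assumptions.

```python
# Hedged sketch of sequence-dimension identity concatenation (not the official Concat-ID code).
import torch
import torch.nn as nn


class ConcatIDBlock(nn.Module):
    """One transformer block operating on the concatenated [video | image] token sequence."""

    def __init__(self, dim: int = 1024, num_heads: int = 16):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # Self-attention over the joint sequence lets video tokens attend to the
        # identity (reference-image) tokens without any cross-attention adapter.
        h = self.norm(tokens)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        tokens = tokens + attn_out
        return tokens + self.mlp(self.norm(tokens))


def concat_identity_tokens(video_latents: torch.Tensor,
                           image_latents: torch.Tensor) -> torch.Tensor:
    """Concatenate reference-image latents with video latents along the sequence axis.

    video_latents: (B, N_video_tokens, D) -- flattened spatio-temporal video latents
    image_latents: (B, N_image_tokens, D) -- VAE features of the reference image(s)
    """
    return torch.cat([video_latents, image_latents], dim=1)


if __name__ == "__main__":
    B, D = 2, 1024
    video = torch.randn(B, 4096, D)   # hypothetical flattened video latent tokens
    image = torch.randn(B, 256, D)    # hypothetical flattened reference-image tokens
    block = ConcatIDBlock(dim=D)
    out = block(concat_identity_tokens(video, image))
    print(out.shape)  # (2, 4352, 1024); identity tokens would be dropped after the backbone
```

The design choice this illustrates is that identity conditioning rides on the existing attention pathway: because the image tokens share the same sequence as the video tokens, the 3D self-attention already present in the video backbone can propagate identity information without adding new modules.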
