AnyTalker: Scaling Multi-Person Talking Video Generation with Interactivity Refinement
November 28, 2025
作者: Zhizhou Zhong, Yicheng Ji, Zhe Kong, Yiying Liu, Jiarui Wang, Jiasun Feng, Lupeng Liu, Xiangyi Wang, Yanjia Li, Yuqing She, Ying Qin, Huan Li, Shuiyang Mao, Wei Liu, Wenhan Luo
cs.AI
Abstract
Recently, multi-person video generation has started to gain prominence. While a few preliminary works have explored audio-driven multi-person talking video generation, they often face challenges due to the high cost of collecting diverse multi-person data and the difficulty of driving multiple identities with coherent interactivity. To address these challenges, we propose AnyTalker, a multi-person generation framework that features an extensible multi-stream processing architecture. Specifically, we extend the Diffusion Transformer's attention block with a novel identity-aware attention mechanism that iteratively processes identity-audio pairs, allowing arbitrary scaling of drivable identities. Moreover, while training multi-person generative models typically demands massive multi-person data, our proposed training pipeline relies solely on single-person videos to learn multi-person speaking patterns and refines interactivity with only a few real multi-person clips. Furthermore, we contribute a targeted metric and dataset designed to evaluate the naturalness and interactivity of the generated multi-person videos. Extensive experiments demonstrate that AnyTalker achieves remarkable lip synchronization, visual quality, and natural interactivity, striking a favorable balance between data cost and identity scalability.
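To make the identity-aware attention idea concrete, below is a minimal PyTorch sketch of how an attention block might iterate over identity-audio pairs inside a DiT layer. This is not the authors' implementation; all names (IdentityAwareAttention, audio_feats, id_masks) and the masking scheme are illustrative assumptions, chosen only to show how a shared cross-attention module can be reused per identity so that the number of drivable identities can grow without adding parameters.

```python
# Minimal sketch (assumption, not the paper's code) of identity-aware attention
# that iteratively processes identity-audio pairs inside a DiT-style block.
import torch
import torch.nn as nn

class IdentityAwareAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        # One shared cross-attention module is reused for every identity,
        # so adding identities does not add parameters.
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video_tokens, audio_feats, id_masks):
        """
        video_tokens: (B, N, D)  latent video tokens from the DiT stream
        audio_feats:  list of (B, T_i, D) audio features, one per identity
        id_masks:     list of (B, N) boolean masks marking each identity's region
        """
        out = video_tokens
        # Iterate over identity-audio pairs: each identity's tokens attend
        # only to that identity's audio stream.
        for audio, mask in zip(audio_feats, id_masks):
            attended, _ = self.cross_attn(self.norm(out), audio, audio)
            out = out + attended * mask.unsqueeze(-1)  # per-identity residual update
        return out

# Toy usage: two identities sharing one frame of 16 latent tokens.
B, N, D = 1, 16, 64
block = IdentityAwareAttention(D)
video = torch.randn(B, N, D)
audios = [torch.randn(B, 10, D), torch.randn(B, 12, D)]
masks = [torch.zeros(B, N, dtype=torch.bool), torch.zeros(B, N, dtype=torch.bool)]
masks[0][:, :8] = True   # identity 1 occupies the first half of the tokens
masks[1][:, 8:] = True   # identity 2 occupies the second half
print(block(video, audios, masks).shape)  # torch.Size([1, 16, 64])
```

Because the loop simply consumes a list of identity-audio pairs, extending the sketch to more speakers is a matter of appending to the lists, which mirrors the "arbitrary scaling of drivable identities" described in the abstract.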