OmniHuman-1.5: Instilling an Active Mind in Avatars via Cognitive Simulation
August 26, 2025
Authors: Jianwen Jiang, Weihong Zeng, Zerong Zheng, Jiaqi Yang, Chao Liang, Wang Liao, Han Liang, Yuan Zhang, Mingyuan Gao
cs.AI
Abstract
Existing video avatar models can produce fluid human animations, yet they
struggle to move beyond mere physical likeness to capture a character's
authentic essence. Their motions typically synchronize with low-level cues like
audio rhythm, lacking a deeper semantic understanding of emotion, intent, or
context. To bridge this gap, we propose a framework designed to
generate character animations that are not only physically plausible but also
semantically coherent and expressive. Our model, OmniHuman-1.5, is
built upon two key technical contributions. First, we leverage Multimodal Large
Language Models to synthesize a structured textual representation of conditions
that provides high-level semantic guidance. This guidance steers our motion
generator beyond simplistic rhythmic synchronization, enabling the production
of actions that are contextually and emotionally resonant. Second, to ensure
the effective fusion of these multimodal inputs and mitigate inter-modality
conflicts, we introduce a specialized Multimodal DiT architecture with a novel
Pseudo Last Frame design. The synergy of these components allows our model to
accurately interpret the joint semantics of audio, images, and text, thereby
generating motions that are deeply coherent with the character, scene, and
linguistic content. Extensive experiments demonstrate that our model achieves
leading performance across a comprehensive set of metrics, including lip-sync
accuracy, video quality, motion naturalness, and semantic consistency with
textual prompts. Furthermore, our approach shows remarkable extensibility to
complex scenarios, such as those involving multi-person and non-human subjects.
Homepage: https://omnihuman-lab.github.io/v1_5/
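
The abstract describes the approach at a high level: a Multimodal LLM distills the inputs into a structured textual condition, and a Multimodal DiT with a Pseudo Last Frame design fuses text, audio, and reference-image signals into motion generation. As a rough illustration only, the PyTorch sketch below shows one way such a fusion block could be wired. Every module name, tensor shape, and the treatment of the pseudo-last-frame token (here, a single extra condition token) are hypothetical assumptions for illustration, not the authors' implementation, which is not specified at this level of detail in the abstract.

```python
# Hypothetical sketch of the conditioning flow sketched in the abstract:
# an MLLM-derived structured text condition plus audio / reference-image
# features and a "pseudo last frame" token are jointly attended with the
# video latents inside a DiT-style transformer block.
# All names, dimensions, and the fusion layout are illustrative assumptions.
import torch
import torch.nn as nn


class MultimodalDiTBlock(nn.Module):
    """One transformer block attending over video latents together with
    concatenated condition tokens (text, audio, image, pseudo last frame)."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, video_tokens, cond_tokens):
        # Self-attention over the concatenation of video latents and conditions,
        # so motion tokens can read from every modality in one pass.
        x = torch.cat([video_tokens, cond_tokens], dim=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        # Return only the video-token slice; conditions act as read-only context.
        return x[:, : video_tokens.shape[1]]


def build_condition_tokens(text_emb, audio_emb, image_emb, pseudo_last_frame_emb):
    """Concatenate per-modality embeddings into a single condition sequence.
    The pseudo last frame is modeled here as one extra condition token
    (an assumption about how such a design could look)."""
    return torch.cat([text_emb, audio_emb, image_emb, pseudo_last_frame_emb], dim=1)


if __name__ == "__main__":
    B, dim = 2, 512
    video_tokens = torch.randn(B, 64, dim)   # noised video latents
    text_emb = torch.randn(B, 16, dim)       # MLLM-derived structured text condition
    audio_emb = torch.randn(B, 32, dim)      # audio features (e.g. per frame)
    image_emb = torch.randn(B, 8, dim)       # reference-image tokens
    pseudo_last = torch.randn(B, 1, dim)     # pseudo-last-frame token

    cond = build_condition_tokens(text_emb, audio_emb, image_emb, pseudo_last)
    out = MultimodalDiTBlock(dim)(video_tokens, cond)
    print(out.shape)  # torch.Size([2, 64, 512])
```

Treating all modalities as one attended token sequence is only one plausible reading of "multimodal fusion" here; how the actual model routes audio, text, and the pseudo last frame is described in the full paper, not the abstract.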