OmniHuman-1.5: Instilling an Active Mind in Avatars via Cognitive Simulation
August 26, 2025
Authors: Jianwen Jiang, Weihong Zeng, Zerong Zheng, Jiaqi Yang, Chao Liang, Wang Liao, Han Liang, Yuan Zhang, Mingyuan Gao
cs.AI
Abstract
Existing video avatar models can produce fluid human animations, yet they
struggle to move beyond mere physical likeness to capture a character's
authentic essence. Their motions typically synchronize with low-level cues like
audio rhythm, lacking a deeper semantic understanding of emotion, intent, or
context. To bridge this gap, we propose a framework designed to
generate character animations that are not only physically plausible but also
semantically coherent and expressive. Our model, OmniHuman-1.5, is
built upon two key technical contributions. First, we leverage Multimodal Large
Language Models to synthesize a structured textual representation of conditions
that provides high-level semantic guidance. This guidance steers our motion
generator beyond simplistic rhythmic synchronization, enabling the production
of actions that are contextually and emotionally resonant. Second, to ensure
the effective fusion of these multimodal inputs and mitigate inter-modality
conflicts, we introduce a specialized Multimodal DiT architecture with a novel
Pseudo Last Frame design. The synergy of these components allows our model to
accurately interpret the joint semantics of audio, images, and text, thereby
generating motions that are deeply coherent with the character, scene, and
linguistic content. Extensive experiments demonstrate that our model achieves
leading performance across a comprehensive set of metrics, including lip-sync
accuracy, video quality, motion naturalness, and semantic consistency with
textual prompts. Furthermore, our approach shows remarkable extensibility to
complex scenarios, such as those involving multi-person and non-human subjects.
Homepage: https://omnihuman-lab.github.io/v1_5/
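To make the first contribution more concrete, below is a minimal Python sketch of how an MLLM could be prompted to turn per-clip context into a structured text condition for a motion generator. The output schema (emotion/intent/actions) and the injected helpers (describe_image, transcribe, query_mllm) are illustrative assumptions; the abstract does not specify the authors' actual prompt or interface.

```python
# Hypothetical sketch: turning multimodal inputs into a structured text condition.
# The schema and the injected helpers (describe_image, transcribe, query_mllm)
# are illustrative assumptions, not the paper's actual API.
import json

def build_semantic_condition(image_path: str, audio_path: str, user_prompt: str,
                             query_mllm, describe_image, transcribe) -> str:
    """Ask an MLLM for high-level guidance (emotion, intent, plausible actions)."""
    context = {
        "image_description": describe_image(image_path),  # e.g. "a woman on a stage"
        "speech_transcript": transcribe(audio_path),       # e.g. "thank you all for coming"
        "user_prompt": user_prompt,                        # optional free-form text
    }
    instruction = (
        "Given the character, scene, and speech below, describe the character's "
        "emotion, intent, and plausible body actions as JSON with keys "
        "'emotion', 'intent', 'actions'.\n" + json.dumps(context, ensure_ascii=False)
    )
    # The returned JSON string is then used as the text condition for the motion generator.
    return query_mllm(instruction)
```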
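The second contribution can likewise be pictured as joint attention over video, audio, and text tokens, with a set of learnable tokens standing in for an unknown "last frame" of the clip. The PyTorch block below is only a generic sketch under that assumption; the actual Multimodal DiT and Pseudo Last Frame design in OmniHuman-1.5 is not described at this level of detail in the abstract.

```python
# Generic sketch of joint multimodal fusion in a DiT-style block with learnable
# pseudo-last-frame tokens. This is an assumption-based illustration, not the
# paper's architecture.
import torch
import torch.nn as nn

class JointFusionBlock(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8, pseudo_frame_tokens: int = 16):
        super().__init__()
        # Learnable tokens standing in for an (unknown) last frame of the clip.
        self.pseudo_last_frame = nn.Parameter(torch.randn(1, pseudo_frame_tokens, dim) * 0.02)
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.mlp = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                 nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, video_tokens, audio_tokens, text_tokens):
        b = video_tokens.size(0)
        pseudo = self.pseudo_last_frame.expand(b, -1, -1)
        # Concatenate all modalities into one sequence and attend jointly.
        x = torch.cat([video_tokens, pseudo, audio_tokens, text_tokens], dim=1)
        h = self.norm(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(x)
        # Keep only the video token stream for the next block.
        return x[:, : video_tokens.size(1)]

if __name__ == "__main__":
    block = JointFusionBlock()
    v, a, t = torch.randn(2, 64, 512), torch.randn(2, 32, 512), torch.randn(2, 16, 512)
    print(block(v, a, t).shape)  # torch.Size([2, 64, 512])
```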