MIDAS: Multimodal Interactive Digital-human Synthesis via Real-time Autoregressive Video Generation
August 26, 2025
Authors: Ming Chen, Liyuan Cui, Wenyuan Zhang, Haoxian Zhang, Yan Zhou, Xiaohan Li, Xiaoqiang Liu, Pengfei Wan
cs.AI
Abstract
Recently, interactive digital human video generation has attracted widespread
attention and achieved remarkable progress. However, building a practical
system that can respond to diverse input signals in real time remains
challenging for existing methods, which often struggle with high latency, heavy
computational cost, and limited controllability. In this work, we introduce an
autoregressive video generation framework that enables interactive multimodal
control and low-latency extrapolation in a streaming manner. With minimal
modifications to a standard large language model (LLM), our framework accepts
multimodal condition encodings including audio, pose, and text, and outputs
spatially and semantically coherent representations to guide the denoising
process of a diffusion head. To support this, we construct a large-scale
dialogue dataset of approximately 20,000 hours from multiple sources, providing
rich conversational scenarios for training. We further introduce a deep
compression autoencoder with a reduction ratio of up to 64×, which
effectively alleviates the long-horizon inference burden of the autoregressive
model. Extensive experiments on duplex conversation, multilingual human
synthesis, and interactive world modeling highlight the advantages of our approach
in low latency, high efficiency, and fine-grained multimodal controllability.
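
The abstract describes the conditioning flow only at a high level, so the following is a minimal, self-contained sketch of one plausible reading: per-chunk audio, pose, and text embeddings are appended to previously generated frame tokens, a causal transformer produces hidden states, and the last hidden state conditions a small diffusion head that iteratively denoises the next frame latent. All module names, dimensions, the toy MLP head, and the schematic denoising loop are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch (not the paper's code): an autoregressive backbone that
# consumes multimodal condition embeddings and emits hidden states used to
# condition a diffusion head. All names and dimensions are assumptions.
import torch
import torch.nn as nn

D = 512  # hypothetical model width

class ARBackbone(nn.Module):
    """Causal transformer over interleaved condition + frame-latent tokens."""
    def __init__(self, dim=D, layers=4, heads=8):
        super().__init__()
        block = nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, layers)

    def forward(self, tokens):                      # tokens: (B, T, D)
        T = tokens.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        return self.encoder(tokens, mask=causal)    # (B, T, D)

class DiffusionHead(nn.Module):
    """Tiny MLP noise predictor, conditioned on a backbone hidden state
    (a stand-in for the paper's diffusion head)."""
    def __init__(self, latent_dim=64, dim=D):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + dim + 1, dim), nn.SiLU(), nn.Linear(dim, latent_dim)
        )

    def forward(self, noisy_latent, cond, t):       # (B, L), (B, D), (B, 1)
        return self.net(torch.cat([noisy_latent, cond, t], dim=-1))

# One autoregressive step: audio / pose / text embeddings for the current chunk
# are concatenated with past frame tokens, and the last hidden state guides the
# denoising of the next frame latent.
B, latent_dim = 2, 64
audio, pose, text = (torch.randn(B, 1, D) for _ in range(3))   # per-chunk conditions
past_frames = torch.randn(B, 5, D)                              # already-generated frame tokens
backbone, head = ARBackbone(), DiffusionHead(latent_dim)

hidden = backbone(torch.cat([past_frames, audio, pose, text], dim=1))
cond = hidden[:, -1]                                            # (B, D) guidance vector

latent = torch.randn(B, latent_dim)                             # start from noise
for step in reversed(range(10)):                                # toy denoising loop
    t = torch.full((B, 1), step / 10.0)
    latent = latent - 0.1 * head(latent, cond, t)               # schematic update only
print(latent.shape)  # (B, latent_dim): would be decoded to a frame by the autoencoder
```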
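The 64× figure is likewise only stated, not specified, in the abstract. The sketch below shows a deep-compression convolutional autoencoder under the assumption that the ratio refers to per-side spatial downsampling, so six stride-2 stages give 2^6 = 64; the channel widths and the plain conv/transposed-conv layout are illustrative choices, not the paper's architecture.

```python
# Illustrative sketch (not the paper's autoencoder): six stride-2 encoder
# stages reduce each spatial side by 2**6 = 64; the decoder mirrors them.
import torch
import torch.nn as nn

class DeepCompressionAE(nn.Module):
    def __init__(self, in_ch=3, base=32, latent_ch=16, stages=6):
        super().__init__()
        enc, ch = [], in_ch
        for i in range(stages):
            out = min(base * 2 ** i, 512)
            enc += [nn.Conv2d(ch, out, 4, stride=2, padding=1), nn.SiLU()]
            ch = out
        enc.append(nn.Conv2d(ch, latent_ch, 3, padding=1))
        self.encoder = nn.Sequential(*enc)

        dec = [nn.Conv2d(latent_ch, ch, 3, padding=1), nn.SiLU()]
        for i in reversed(range(stages)):
            out = in_ch if i == 0 else min(base * 2 ** (i - 1), 512)
            dec.append(nn.ConvTranspose2d(ch, out, 4, stride=2, padding=1))
            if i != 0:
                dec.append(nn.SiLU())
            ch = out
        self.decoder = nn.Sequential(*dec)

    def forward(self, x):
        z = self.encoder(x)            # (B, latent_ch, H/64, W/64)
        return self.decoder(z), z

x = torch.randn(1, 3, 256, 256)
recon, z = DeepCompressionAE()(x)
print(z.shape, recon.shape)            # (1, 16, 4, 4) and (1, 3, 256, 256)
```

A shorter latent sequence per frame is what eases long-horizon autoregressive inference: with fewer tokens per frame, the backbone's context grows more slowly as the video is extrapolated.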