JAM-Flow: Joint Audio-Motion Synthesis with Flow Matching
June 30, 2025
Authors: Mingi Kwon, Joonghyuk Shin, Jaeseok Jung, Jaesik Park, Youngjung Uh
cs.AI
Abstract
The intrinsic link between facial motion and speech is often overlooked in generative modeling, where talking head synthesis and text-to-speech (TTS) are typically addressed as separate tasks. This paper introduces JAM-Flow, a unified framework to simultaneously synthesize and condition on both facial motion and speech. Our approach leverages flow matching and a novel Multi-Modal Diffusion Transformer (MM-DiT) architecture, integrating specialized Motion-DiT and Audio-DiT modules. These are coupled via selective joint attention layers and incorporate key architectural choices, such as temporally aligned positional embeddings and localized joint attention masking, to enable effective cross-modal interaction while preserving modality-specific strengths. Trained with an inpainting-style objective, JAM-Flow supports a wide array of conditioning inputs, including text, reference audio, and reference motion, facilitating tasks such as synchronized talking head generation from text, audio-driven animation, and much more, within a single, coherent model. JAM-Flow significantly advances multi-modal generative modeling by providing a practical solution for holistic audio-visual synthesis.

Project page: https://joonghyuk.com/jamflow-web
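
The abstract states that the Motion-DiT and Audio-DiT branches are coupled through selective joint attention layers, with temporally aligned positional embeddings and a localized joint attention mask, but gives no implementation details. The sketch below is a minimal, hypothetical PyTorch illustration of one way such a coupling could work: each modality keeps its own projections, attention runs over the concatenated token sequence, and cross-modal attention is restricted to a temporal window around aligned positions. All names (`JointAttentionBlock`, `localized_joint_mask`), dimensions, the window size, and the alignment ratio are assumptions for illustration only, not the authors' implementation.

```python
# Illustrative sketch (not the JAM-Flow code) of a joint attention layer
# coupling a motion stream and an audio stream with a localized cross-modal mask.
import torch
import torch.nn as nn
import torch.nn.functional as F


def localized_joint_mask(n_motion: int, n_audio: int, window: int) -> torch.Tensor:
    """Boolean mask (True = may attend) over the concatenated [motion | audio] tokens.

    Within-modality attention is unrestricted; cross-modal attention is limited to a
    temporal window, assuming the two streams are temporally aligned so that motion
    token i corresponds to audio token round(i * n_audio / n_motion).
    """
    n = n_motion + n_audio
    mask = torch.zeros(n, n, dtype=torch.bool)
    mask[:n_motion, :n_motion] = True           # motion -> motion
    mask[n_motion:, n_motion:] = True           # audio  -> audio
    ratio = n_audio / n_motion
    for i in range(n_motion):
        center = int(round(i * ratio))
        lo, hi = max(0, center - window), min(n_audio, center + window + 1)
        mask[i, n_motion + lo:n_motion + hi] = True   # motion -> audio (local)
        mask[n_motion + lo:n_motion + hi, i] = True   # audio  -> motion (local)
    return mask


class JointAttentionBlock(nn.Module):
    """Each modality has its own QKV and output projections, but attention is
    computed jointly over the concatenated sequence under the given mask."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.qkv_motion = nn.Linear(dim, 3 * dim)
        self.qkv_audio = nn.Linear(dim, 3 * dim)
        self.out_motion = nn.Linear(dim, dim)
        self.out_audio = nn.Linear(dim, dim)

    def forward(self, motion: torch.Tensor, audio: torch.Tensor,
                mask: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        b, n_m, d = motion.shape
        n_a = audio.shape[1]
        h = self.heads

        def split(qkv: torch.Tensor, n: int):
            q, k, v = qkv.chunk(3, dim=-1)
            return [t.view(b, n, h, d // h).transpose(1, 2) for t in (q, k, v)]

        qm, km, vm = split(self.qkv_motion(motion), n_m)
        qa, ka, va = split(self.qkv_audio(audio), n_a)
        q = torch.cat([qm, qa], dim=2)            # (b, h, n_m + n_a, d/h)
        k = torch.cat([km, ka], dim=2)
        v = torch.cat([vm, va], dim=2)
        out = F.scaled_dot_product_attention(q, k, v, attn_mask=mask)
        out = out.transpose(1, 2).reshape(b, n_m + n_a, d)
        return self.out_motion(out[:, :n_m]), self.out_audio(out[:, n_m:])


if __name__ == "__main__":
    motion = torch.randn(2, 50, 256)    # e.g. 50 motion frames (assumed)
    audio = torch.randn(2, 200, 256)    # e.g. 200 audio latent frames (assumed)
    mask = localized_joint_mask(50, 200, window=8)
    block = JointAttentionBlock(dim=256)
    m_out, a_out = block(motion, audio, mask)
    print(m_out.shape, a_out.shape)     # (2, 50, 256) and (2, 200, 256)
```

The banded mask here stands in for the paper's "localized joint attention masking": it lets each motion token exchange information only with temporally nearby audio tokens while leaving within-modality attention unconstrained, which is one plausible way to preserve modality-specific behavior while enabling synchronized cross-modal interaction.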