JavisGPT: A Unified Multi-modal LLM for Sounding-Video Comprehension and Generation
December 28, 2025
Authors: Kai Liu, Jungang Li, Yuchong Sun, Shengqiong Wu, Jianzhang Gao, Daoan Zhang, Wei Zhang, Sheng Jin, Sicheng Yu, Geng Zhan, Jiayi Ji, Fan Zhou, Liang Zheng, Shuicheng Yan, Hao Fei, Tat-Seng Chua
cs.AI
Abstract
This paper presents JavisGPT, the first unified multimodal large language model (MLLM) for Joint Audio-Video (JAV) comprehension and generation. JavisGPT adopts a concise encoder-LLM-decoder architecture, featuring a SyncFusion module for spatio-temporal audio-video fusion and synchrony-aware learnable queries that bridge a pretrained JAV-DiT generator. This design enables temporally coherent audio-video understanding and generation from multimodal instructions. We design a three-stage training pipeline, consisting of multimodal pretraining, audio-video fine-tuning, and large-scale instruction tuning, to progressively build multimodal comprehension and generation capabilities on top of existing vision-language models. To support training, we further construct JavisInst-Omni, a high-quality instruction dataset of over 200K GPT-4o-curated audio-video-text dialogues spanning diverse, multi-level comprehension and generation scenarios. Extensive experiments on JAV comprehension and generation benchmarks show that JavisGPT outperforms existing MLLMs, particularly in complex and temporally synchronized settings.
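To make the described encoder-LLM-decoder layout concrete, the following is a minimal PyTorch sketch of the data flow the abstract outlines: audio-video fusion, an LLM backbone, and learnable queries that read out conditioning for a generator. All module internals, class names (`SyncFusion`, `JavisGPTSketch`, `sync_queries`), dimensions, and wiring here are illustrative assumptions, not the authors' implementation; the paper does not specify these details.

```python
# Hypothetical sketch of the encoder-LLM-decoder pipeline described in the
# abstract. Module designs and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn


class SyncFusion(nn.Module):
    """Fuse video and audio tokens via cross-attention (assumed design)."""

    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, video: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # video: (B, Tv, D) frame tokens; audio: (B, Ta, D) audio tokens.
        fused, _ = self.cross_attn(video, audio, audio)
        return self.norm(video + fused)  # residual keeps visual content intact


class JavisGPTSketch(nn.Module):
    """Encoder -> LLM -> synchrony-aware queries -> generator conditioning."""

    def __init__(self, dim: int = 768, num_queries: int = 32):
        super().__init__()
        self.fusion = SyncFusion(dim)
        # Stand-in for the backbone LLM; a real system would load a
        # pretrained vision-language model here.
        self.llm = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True),
            num_layers=2,
        )
        # Learnable queries that extract generation conditions from LLM states,
        # which would then condition the pretrained JAV-DiT generator.
        self.sync_queries = nn.Parameter(torch.randn(1, num_queries, dim) * 0.02)
        self.readout = nn.MultiheadAttention(dim, 8, batch_first=True)

    def forward(self, video: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        fused = self.fusion(video, audio)           # (B, Tv, D)
        hidden = self.llm(fused)                    # (B, Tv, D) LLM hidden states
        queries = self.sync_queries.expand(hidden.size(0), -1, -1)
        cond, _ = self.readout(queries, hidden, hidden)
        return cond                                 # (B, num_queries, D)


if __name__ == "__main__":
    model = JavisGPTSketch()
    video = torch.randn(2, 16, 768)   # 16 frame tokens per clip
    audio = torch.randn(2, 50, 768)   # 50 audio tokens per clip
    print(model(video, audio).shape)  # torch.Size([2, 32, 768])
```

In this sketch the learnable queries decouple the LLM's variable-length hidden states from the fixed-size conditioning a diffusion generator expects, which is one plausible reading of how "synchrony-aware learnable queries" bridge the LLM and the JAV-DiT generator.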