

Klear: Unified Multi-Task Audio-Video Joint Generation

January 7, 2026
Authors: Jun Wang, Chunyu Qiang, Yuxin Guo, Yiran Wang, Xijuan Zeng, Chen Zhang, Pengfei Wan
cs.AI

Abstract

Audio-video joint generation has progressed rapidly, yet substantial challenges remain. Non-commercial approaches suffer from audio-visual asynchrony, poor lip-speech alignment, and unimodal degradation, problems that stem from weak audio-visual correspondence modeling, limited generalization, and the scarcity of high-quality dense-caption data. To address these issues, we introduce Klear and explore three axes: model architecture, training strategy, and data curation. Architecturally, we adopt a single-tower design with unified DiT blocks and an Omni-Full Attention mechanism, achieving tight audio-visual alignment and strong scalability. For training, we adopt a progressive multitask regime that combines random modality masking for joint optimization across tasks with a multistage curriculum, yielding robust representations, strengthening audio-visual-aligned world knowledge, and preventing unimodal collapse. For data, we present the first large-scale audio-video dataset with dense captions and introduce an automated data-construction pipeline that annotates and filters millions of diverse, high-quality, strictly aligned audio-video-caption triplets. Building on these components, Klear scales to large datasets and delivers high-fidelity, semantically and temporally aligned, instruction-following generation in both joint and unimodal settings while generalizing robustly to out-of-distribution scenarios. Across tasks, it outperforms prior methods by a large margin and achieves performance comparable to Veo 3, offering a unified, scalable path toward next-generation audio-video synthesis.
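To make the two mechanisms named in the abstract concrete, the PyTorch sketch below illustrates (1) a single-tower block that runs one full self-attention pass over the concatenated audio and video token sequences (the idea behind "Omni-Full Attention") and (2) random modality masking so the same model is optimized on joint and unimodal objectives. This is a minimal illustration under assumptions, not the paper's implementation: the class names, dimensions, and masking probabilities are hypothetical.

```python
# Minimal sketch (illustrative, not the authors' code).
import torch
import torch.nn as nn


class OmniFullAttentionBlock(nn.Module):
    """DiT-style block where audio and video tokens share one attention pass."""

    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, audio_tokens, video_tokens, key_padding_mask=None):
        # Concatenate modalities so every audio token can attend to every
        # video token (and vice versa) in a single full-attention pass.
        x = torch.cat([audio_tokens, video_tokens], dim=1)
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, key_padding_mask=key_padding_mask)
        x = x + attn_out
        x = x + self.mlp(self.norm2(x))
        n_audio = audio_tokens.shape[1]
        return x[:, :n_audio], x[:, n_audio:]


def sample_modality_mask(batch: int, n_audio: int, n_video: int, p_drop: float = 0.3):
    """Per sample, randomly hide one modality from attention (as keys), so one
    model covers joint, audio-only, and video-only training tasks."""
    mask = torch.zeros(batch, n_audio + n_video, dtype=torch.bool)  # True = ignored key
    for b in range(batch):
        r = torch.rand(1).item()
        if r < p_drop:            # audio-side task: video tokens masked out
            mask[b, n_audio:] = True
        elif r < 2 * p_drop:      # video-side task: audio tokens masked out
            mask[b, :n_audio] = True
        # otherwise: joint audio-video task, nothing masked
    return mask


if __name__ == "__main__":
    B, Na, Nv, D = 2, 16, 64, 512
    audio = torch.randn(B, Na, D)
    video = torch.randn(B, Nv, D)
    block = OmniFullAttentionBlock(D)
    mask = sample_modality_mask(B, Na, Nv)
    a_out, v_out = block(audio, video, key_padding_mask=mask)
    print(a_out.shape, v_out.shape)  # (2, 16, 512) and (2, 64, 512)
```

In this reading, joint attention over one concatenated sequence is what enforces tight temporal audio-visual alignment, while the stochastic masking exposes the shared tower to unimodal objectives and so discourages unimodal collapse; how Klear actually schedules these tasks across its multistage curriculum is described in the paper, not here.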