LLaDA-V: Large Language Diffusion Models with Visual Instruction Tuning

May 22, 2025
Authors: Zebin You, Shen Nie, Xiaolu Zhang, Jun Hu, Jun Zhou, Zhiwu Lu, Ji-Rong Wen, Chongxuan Li
cs.AI

Abstract

In this work, we introduce LLaDA-V, a purely diffusion-based Multimodal Large Language Model (MLLM) that integrates visual instruction tuning with masked diffusion models, representing a departure from the autoregressive paradigms dominant in current multimodal approaches. Built upon LLaDA, a representative large language diffusion model, LLaDA-V incorporates a vision encoder and an MLP connector that projects visual features into the language embedding space, enabling effective multimodal alignment. Our empirical investigation reveals several intriguing results. First, LLaDA-V demonstrates promising multimodal performance despite its language model being weaker on purely textual tasks than counterparts such as LLaMA3-8B and Qwen2-7B. When trained on the same instruction data, LLaDA-V is highly competitive with LLaMA3-V across multimodal tasks and shows better data scalability. It also narrows the performance gap to Qwen2-VL, suggesting the effectiveness of its architecture for multimodal tasks. Second, LLaDA-V achieves state-of-the-art performance in multimodal understanding compared with existing hybrid autoregressive-diffusion and purely diffusion-based MLLMs. Our findings suggest that large language diffusion models show promise in multimodal contexts and warrant further investigation in future research. Project page and code: https://ml-gsai.github.io/LLaDA-V-demo/.
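
The connector described in the abstract follows the common LLaVA-style design: vision-encoder features are mapped into the language model's embedding space by a small MLP and then concatenated with the text-token embeddings consumed by the diffusion language model. Below is a minimal PyTorch sketch of that idea, not the authors' implementation; the dimensions, depth, and names (VisionToLanguageConnector, vision_dim, llm_dim) are illustrative assumptions.

import torch
import torch.nn as nn


class VisionToLanguageConnector(nn.Module):
    """Hypothetical two-layer MLP mapping vision-token features to the LLM embedding size."""

    def __init__(self, vision_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(vision_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, vision_features: torch.Tensor) -> torch.Tensor:
        # vision_features: (batch, num_patches, vision_dim) from a vision encoder
        # returns:         (batch, num_patches, llm_dim), ready to be concatenated
        #                  with text-token embeddings fed to the diffusion LLM
        return self.proj(vision_features)


# Usage: project dummy vision features and prepend them to placeholder text embeddings.
connector = VisionToLanguageConnector()
image_tokens = connector(torch.randn(1, 576, 1024))   # -> (1, 576, 4096)
text_tokens = torch.randn(1, 32, 4096)                # placeholder text embeddings
multimodal_sequence = torch.cat([image_tokens, text_tokens], dim=1)
print(multimodal_sequence.shape)                      # torch.Size([1, 608, 4096])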
