MNAFT: Modality Neuron-Aware Fine-Tuning of Multimodal Large Language Models for Image Translation
April 18, 2026
Authors: Bo Li, Ningyuan Deng, Tianyu Dong, Shaobo Wang, Shaolin Zhu, Lijie Wen
cs.AI
Abstract
Multimodal large language models (MLLMs) have shown impressive capabilities, yet they often struggle to capture the fine-grained textual information within images that is crucial for accurate image translation, leading to a modality gap between visual text inputs and textual inputs/outputs. Existing methods, which rely primarily on instruction fine-tuning, risk introducing parameter redundancy that degrades pre-trained knowledge and hinders generalization. To address this, we introduce modality neuron-aware fine-tuning (MNAFT), a novel approach that exploits the specialized roles of individual neurons within MLLMs to improve image translation. MNAFT identifies language-agnostic and language-specific neurons in both the vision and language modules through instruction-driven activation analysis, evaluating their importance across translation tasks. We then perform selective fine-tuning, updating only the parameters of the language-specific and language-agnostic neurons within the selected layers relevant to the target task, while preserving the knowledge encoded in other neurons and layers. Extensive experiments on multiple benchmarks demonstrate that MNAFT significantly outperforms state-of-the-art image translation methods, including cascaded models, standard full fine-tuning, and parameter-efficient tuning techniques. Furthermore, we provide a comprehensive analysis, including visualizations of neuron activations and clustering patterns, that sheds light on how different neuron groups mediate cross-modal understanding and enable accurate language-specific translation.
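
To make the two stages of the abstract concrete, below is a minimal PyTorch sketch of neuron identification via activation statistics followed by selective fine-tuning via gradient masking. Everything here is an illustrative assumption rather than the paper's implementation: a toy MLP stands in for an MLLM block, the mean-absolute-activation score and top-k split are hypothetical importance criteria, and the language codes and probe batches are placeholders.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for one MLLM feed-forward block; "neurons" are the rows of
# the first linear layer's weight matrix.
model = nn.Sequential(nn.Linear(32, 64), nn.GELU(), nn.Linear(64, 32))
hidden = model[0]

# Hypothetical instruction-driven probe batches, one per target language
# (random tensors here; in practice, encoded translation instructions).
probes = {"de": torch.randn(128, 32), "zh": torch.randn(128, 32)}

# Step 1: score each hidden neuron by its mean absolute activation per
# language (an assumed importance criterion, not the paper's exact score).
acts = []
handle = hidden.register_forward_hook(lambda mod, inp, out: acts.append(out))
scores = {}
for lang, batch in probes.items():
    acts.clear()
    with torch.no_grad():
        model(batch)
    scores[lang] = acts[0].abs().mean(dim=0)  # importance per neuron, shape (64,)
handle.remove()

# Step 2: neurons in the top-k for every language are treated as
# language-agnostic; the remaining top-k neurons as language-specific.
k = 8
top = {lang: set(s.topk(k).indices.tolist()) for lang, s in scores.items()}
agnostic = set.intersection(*top.values())
specific = {lang: idx - agnostic for lang, idx in top.items()}

# Step 3: selective fine-tuning for one target language. Freeze every
# parameter except the chosen layer, then mask gradients so that only the
# selected agnostic + target-specific neuron rows are updated.
target = "de"
keep = torch.zeros(hidden.out_features, dtype=torch.bool)
keep[list(agnostic | specific[target])] = True

for p in model.parameters():
    p.requires_grad_(False)
hidden.weight.requires_grad_(True)
hidden.bias.requires_grad_(True)
hidden.weight.register_hook(lambda g: g * keep.unsqueeze(-1).to(g.dtype))
hidden.bias.register_hook(lambda g: g * keep.to(g.dtype))

# One illustrative update step with a dummy loss: the frozen layers and the
# masked neuron rows stay exactly as they were.
opt = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=1e-2)
loss = model(probes[target]).pow(2).mean()
loss.backward()
opt.step()
```

Gradient masking via tensor hooks is just one common way to realize neuron-level selective updates; the paper may instead rebuild parameter groups or apply masks inside a custom optimizer, which the abstract does not specify.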