

Textual Steering Vectors Can Improve Visual Understanding in Multimodal Large Language Models

May 20, 2025
Authors: Woody Haosheng Gan, Deqing Fu, Julian Asilis, Ollie Liu, Dani Yogatama, Vatsal Sharan, Robin Jia, Willie Neiswanger
cs.AI

Abstract

Steering methods have emerged as effective and targeted tools for guiding large language models' (LLMs) behavior without modifying their parameters. Multimodal large language models (MLLMs), however, do not currently enjoy the same suite of techniques, due in part to their recency and architectural diversity. Inspired by this gap, we investigate whether MLLMs can be steered using vectors derived from their text-only LLM backbone, via sparse autoencoders (SAEs), mean shift, and linear probing. We find that text-derived steering consistently enhances multimodal accuracy across diverse MLLM architectures and visual tasks. In particular, mean shift boosts spatial relationship accuracy on CV-Bench by up to +7.3% and counting accuracy by up to +3.3%, outperforming prompting and exhibiting strong generalization to out-of-distribution datasets. These results highlight textual steering vectors as a powerful, efficient mechanism for enhancing grounding in MLLMs with minimal additional data collection and computational overhead.
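
The abstract does not include implementation details, but mean-shift steering of the kind it describes is typically computed as the difference between mean hidden activations over two contrastive sets of text-only prompts, then added to the residual stream at inference time. Below is a minimal sketch of that idea, not the authors' code: the backbone name, layer index, scaling factor, hook placement, and prompt sets are all illustrative assumptions.

```python
# Minimal sketch of text-derived mean-shift steering (not the paper's implementation).
# Assumes a HuggingFace-style causal LM backbone; model name, LAYER, ALPHA,
# and the prompt sets are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # assumed text-only LLM backbone
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

LAYER = 16   # residual-stream layer to read and steer (assumed)
ALPHA = 4.0  # steering strength (assumed)

def mean_activation(prompts: list[str]) -> torch.Tensor:
    """Mean last-token hidden state at LAYER over a set of text prompts."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt")
        with torch.no_grad():
            out = model(**ids)
        acts.append(out.hidden_states[LAYER][0, -1])
    return torch.stack(acts).mean(dim=0)

# Contrastive text-only prompt sets defining the target behavior,
# e.g. spatial-relation statements vs. neutral statements (illustrative).
pos_prompts = ["The cup is to the left of the plate.", "The dog is behind the chair."]
neg_prompts = ["The cup is on the table.", "The dog is in the room."]

steer_vec = mean_activation(pos_prompts) - mean_activation(neg_prompts)

# Add the scaled steering vector to the layer's output during generation.
def steering_hook(module, inputs, output):
    hidden = output[0] if isinstance(output, tuple) else output
    hidden = hidden + ALPHA * steer_vec.to(hidden.dtype)
    return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

handle = model.model.layers[LAYER].register_forward_hook(steering_hook)
ids = tok("Is the cat to the left of the sofa?", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=20)[0], skip_special_tokens=True))
handle.remove()
```

In an MLLM setting, the same vector derived from the text-only backbone would be injected into the corresponding language-model layers while the model processes interleaved image and text tokens; the key point of the paper is that no image data is needed to derive the vector itself.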
