The Hidden Life of Tokens: Reducing Hallucination of Large Vision-Language Models via Visual Information Steering

February 5, 2025
Authors: Zhuowei Li, Haizhou Shi, Yunhe Gao, Di Liu, Zhenting Wang, Yuxiao Chen, Ting Liu, Long Zhao, Hao Wang, Dimitris N. Metaxas
cs.AI

Abstract

Large Vision-Language Models (LVLMs) can reason effectively over both textual and visual inputs, but they tend to hallucinate syntactically coherent yet visually ungrounded content. In this paper, we investigate the internal dynamics of hallucination by examining token logit rankings throughout the generation process, revealing three key patterns in how LVLMs process information: (1) gradual visual information loss -- visually grounded tokens gradually become less favored throughout generation; (2) early excitation -- semantically meaningful tokens reach peak activation at layers earlier than the final layer; and (3) hidden genuine information -- visually grounded tokens, though not ultimately decoded, still retain relatively high rankings at inference. Based on these insights, we propose VISTA (Visual Information Steering with Token-logit Augmentation), a training-free, inference-time intervention framework that reduces hallucination while promoting genuine information. VISTA combines two complementary approaches: reinforcing visual information in activation space and leveraging early-layer activations to promote semantically meaningful decoding. Compared to existing methods, VISTA requires no external supervision and is applicable to various decoding strategies. Extensive experiments show that VISTA reduces hallucination by about 40% on average on the evaluated open-ended generation task, and it consistently outperforms existing methods on four benchmarks across four architectures under three decoding strategies.
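
To make the intervention concrete, below is a minimal, deliberately simplified sketch of how the two components could be combined at one decoding step, assuming a HuggingFace-style, LLaMA-like LVLM backbone (where `model.lm_head` and `model.model.norm` exist and `hidden_states[-1]` is already post final-norm). The function name `vista_next_token_logits`, the steering direction `visual_direction`, and the hyperparameters `alpha`, `early_layer`, and `gamma` are illustrative assumptions, not the paper's exact API.

```python
import torch

@torch.no_grad()
def vista_next_token_logits(model, input_ids, visual_direction,
                            alpha=0.1, early_layer=24, gamma=0.5):
    """Sketch of next-token logits with (1) visual steering in activation
    space and (2) early-layer ("self-logit") augmentation. Illustrative
    only; hyperparameters and names are assumptions, not the paper's API."""
    out = model(input_ids=input_ids, output_hidden_states=True)

    # (1) Reinforce visual information: nudge the last position's final
    # hidden state along a direction that encodes the visual input (e.g.,
    # the difference between hidden states computed with and without the
    # image), counteracting gradual visual information loss.
    h_final = out.hidden_states[-1][:, -1, :]   # already post final-norm in HF LLaMA
    h_steered = h_final + alpha * visual_direction

    # (2) Early-layer logit augmentation via the "logit lens": decode an
    # earlier layer's hidden state through the same unembedding head and
    # fuse it with the final-layer logits, exploiting the early excitation
    # of semantically meaningful tokens.
    h_early = out.hidden_states[early_layer][:, -1, :]
    logits_final = model.lm_head(h_steered)
    logits_early = model.lm_head(model.model.norm(h_early))

    return (1.0 - gamma) * logits_final + gamma * logits_early
```

Because both operations act only on forward-pass activations and logits, the sketch preserves the training-free property and composes with greedy, beam, or nucleus decoding alike.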
