
MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation

October 15, 2024
作者: Chenxi Wang, Xiang Chen, Ningyu Zhang, Bozhong Tian, Haoming Xu, Shumin Deng, Huajun Chen
cs.AI

Abstract

Multimodal Large Language Models (MLLMs) frequently exhibit hallucinations, but the underlying reasons remain poorly understood. In this paper, we present an empirical analysis and find that, although MLLMs generate incorrect objects in the final output, they are actually able to recognize visual objects in the preceding layers. We speculate that this may be due to the strong knowledge priors of the language model suppressing the visual information, leading to hallucinations. Motivated by this, we propose a novel dynamic correction decoding method for MLLMs (DeCo), which adaptively selects the appropriate preceding layers and proportionally integrates their knowledge into the final layer to adjust the output logits. Note that DeCo is model-agnostic and can be seamlessly combined with various classic decoding strategies and applied to different MLLMs. We evaluate DeCo on widely used benchmarks, demonstrating that it can reduce hallucination rates by a large margin compared to baselines, highlighting its potential to mitigate hallucinations. Code is available at https://github.com/zjunlp/DeCo.
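
As a rough illustration of the decoding scheme the abstract describes, the sketch below blends early-exit logits from a dynamically chosen preceding layer into the final-layer logits. This is not the authors' implementation (see the linked repository for the official code); the confidence-based layer-selection heuristic, the candidate-layer window, and the mixing coefficient `alpha` are illustrative assumptions, and the model is assumed to expose per-layer hidden states together with a shared LM head.

```python
# Minimal sketch of dynamic correction decoding, assuming access to per-layer
# hidden states (e.g., via output_hidden_states=True) and the model's LM head.
import torch


def deco_logits(hidden_states, lm_head, candidate_layers, alpha=0.5):
    """Blend logits from a dynamically selected preceding layer into the final layer.

    hidden_states:    list of [batch, hidden] tensors at the current token position,
                      one per transformer layer.
    lm_head:          the model's output projection, reused for early exit.
    candidate_layers: indices of preceding layers to consider (hypothetical window).
    alpha:            proportion of early-layer knowledge mixed into the final logits.
    """
    final_logits = lm_head(hidden_states[-1])  # [batch, vocab]

    # Pick the preceding layer whose early-exit distribution is most confident
    # about its top token (one simple heuristic; the paper's criterion may differ).
    best_layer, best_conf = candidate_layers[0], -1.0
    for layer in candidate_layers:
        probs = torch.softmax(lm_head(hidden_states[layer]), dim=-1)
        conf = probs.max(dim=-1).values.mean().item()
        if conf > best_conf:
            best_layer, best_conf = layer, conf

    early_logits = lm_head(hidden_states[best_layer])

    # Proportionally integrate the early-layer knowledge into the final-layer logits,
    # then decode from the corrected distribution with any standard strategy.
    return final_logits + alpha * early_logits
```

In practice, both `alpha` and the candidate-layer window would need to be tuned per model; the point of the sketch is only to make concrete what "adaptively selects the appropriate preceding layers and proportionally integrates their knowledge into the final layer" could look like at decoding time.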
