On Robustness and Chain-of-Thought Consistency of RL-Finetuned VLMs
February 13, 2026
Authors: Rosie Zhao, Anshul Shah, Xiaoyu Zhu, Xinke Deng, Zhongyu Jiang, Yang Yang, Joerg Liebelt, Arnab Mondal
cs.AI
Abstract
Reinforcement learning (RL) fine-tuning has become a key technique for enhancing large language models (LLMs) on reasoning-intensive tasks, motivating its extension to vision language models (VLMs). While RL-tuned VLMs improve on visual reasoning benchmarks, they remain vulnerable to weak visual grounding, hallucinations, and over-reliance on textual cues. We show that simple, controlled textual perturbations, such as misleading captions or incorrect chain-of-thought (CoT) traces, cause substantial drops in robustness and confidence, and that these effects are more pronounced when CoT consistency is taken into account across open-source multimodal reasoning models. Entropy-based metrics further show that these perturbations reshape model uncertainty and probability mass on the correct option, exposing model-specific trends in miscalibration. To better understand these vulnerabilities, we further analyze RL fine-tuning dynamics and uncover an accuracy-faithfulness trade-off: fine-tuning raises benchmark accuracy, but can simultaneously erode the reliability of the accompanying CoT and its robustness to contextual shifts. Although adversarial augmentation improves robustness, it does not by itself prevent faithfulness drift. Incorporating a faithfulness-aware reward can restore alignment between answers and reasoning, but when paired with augmentation, training risks collapsing onto shortcut strategies and robustness remains elusive. Together, these findings highlight the limitations of accuracy-only evaluations and motivate training and assessment protocols that jointly emphasize correctness, robustness, and the faithfulness of visually grounded reasoning.
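To make the entropy-based measurements mentioned above concrete, the sketch below (not the paper's code) shows one plausible way to quantify them: Shannon entropy over a VLM's answer-option distribution and the probability mass on the correct option, compared before and after a textual perturbation. The helper names and the numbers in the usage example are purely illustrative assumptions.

```python
import math

def option_entropy(probs):
    """Shannon entropy (in nats) of a distribution over answer options."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def perturbation_shift(clean_probs, perturbed_probs, correct_idx):
    """Summarize how a perturbation (e.g. a misleading caption) reshapes
    uncertainty and the probability assigned to the correct option."""
    return {
        "entropy_clean": option_entropy(clean_probs),
        "entropy_perturbed": option_entropy(perturbed_probs),
        "correct_mass_clean": clean_probs[correct_idx],
        "correct_mass_perturbed": perturbed_probs[correct_idx],
    }

# Illustrative distributions over a 4-way multiple-choice question:
# the first list stands in for the model's option probabilities on the clean
# prompt, the second after a misleading caption is appended.
print(perturbation_shift([0.85, 0.05, 0.05, 0.05],
                         [0.30, 0.55, 0.10, 0.05],
                         correct_idx=0))
```

In this toy example the entropy rises and the mass on the correct option falls after the perturbation, which is the kind of miscalibration shift the abstract describes.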