

CAT: Causal Attention Tuning For Injecting Fine-grained Causal Knowledge into Large Language Models

September 1, 2025
作者: Kairong Han, Wenshuo Zhao, Ziyu Zhao, JunJian Ye, Lujia Pan, Kun Kuang
cs.AI

Abstract

Large Language Models (LLMs) have achieved remarkable success across various domains. However, a fundamental question remains: Can LLMs effectively utilize causal knowledge for prediction and generation? Through empirical studies, we find that LLMs trained directly on large-scale data often capture spurious correlations rather than true causal relationships, leading to suboptimal performance, especially in out-of-distribution (OOD) scenarios. To address this challenge, we propose Causal Attention Tuning (CAT), a novel approach that injects fine-grained causal knowledge into the attention mechanism. We propose an automated pipeline that leverages human priors to automatically generate token-level causal signals and introduce the Re-Attention mechanism to guide training, helping the model focus on causal structures while mitigating noise and biases in attention scores. Experimental results on our proposed Spurious Token Game (STG) benchmark and multiple downstream tasks demonstrate that our approach effectively leverages causal knowledge for prediction and remains robust in OOD scenarios. Implementation details can be found at https://github.com/Kairong-Han/CAT.
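The abstract does not spell out the Re-Attention objective, but its stated goal — steering attention mass toward tokens flagged as causal by the token-level signals — can be illustrated with a minimal sketch. The function name, loss form, and binary mask encoding below are assumptions for illustration, not the paper's actual formulation:

```python
import numpy as np

def re_attention_loss(attn, causal_mask, eps=1e-9):
    """Hypothetical auxiliary loss illustrating the Re-Attention idea:
    penalize attention distributions that place little mass on tokens
    marked as causal.

    attn        -- (seq_len,) attention distribution for one query,
                   assumed to sum to 1.
    causal_mask -- (seq_len,) binary token-level causal signal
                   (1 = causal token, 0 = potentially spurious).

    Returns the negative log of the attention mass on causal tokens,
    which is 0 when all mass is on causal tokens and grows as mass
    leaks onto non-causal ones.
    """
    causal_mass = float(np.sum(attn * causal_mask))
    return -np.log(causal_mass + eps)

# Example: uniform attention over 4 tokens, 2 of which are causal.
attn = np.array([0.25, 0.25, 0.25, 0.25])
mask = np.array([1, 0, 1, 0])
loss = re_attention_loss(attn, mask)  # -log(0.5) ~ 0.693
```

Such a term would be added to the language-modeling loss during training; the paper's method additionally mitigates noise and bias in the attention scores themselves, which this sketch does not model.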