

CAT: Causal Attention Tuning For Injecting Fine-grained Causal Knowledge into Large Language Models

September 1, 2025
作者: Kairong Han, Wenshuo Zhao, Ziyu Zhao, JunJian Ye, Lujia Pan, Kun Kuang
cs.AI

Abstract

Large Language Models (LLMs) have achieved remarkable success across various domains. However, a fundamental question remains: Can LLMs effectively utilize causal knowledge for prediction and generation? Through empirical studies, we find that LLMs trained directly on large-scale data often capture spurious correlations rather than true causal relationships, leading to suboptimal performance, especially in out-of-distribution (OOD) scenarios. To address this challenge, we propose Causal Attention Tuning (CAT), a novel approach that injects fine-grained causal knowledge into the attention mechanism. We design an automated pipeline that leverages human priors to generate token-level causal signals, and we introduce a Re-Attention mechanism to guide training, helping the model focus on causal structures while mitigating noise and biases in attention scores. Experimental results on our proposed Spurious Token Game (STG) benchmark and multiple downstream tasks demonstrate that our approach effectively leverages causal knowledge for prediction and remains robust in OOD scenarios. Implementation details can be found at https://github.com/Kairong-Han/CAT.
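The abstract does not spell out the Re-Attention objective, but its core idea — using token-level causal signals to steer attention scores during training — can be illustrated. Below is a minimal, hypothetical sketch (not the paper's actual formulation; see the repository above for the real implementation): a binary per-token causal mask is normalized into a target distribution over key tokens, and a cross-entropy term penalizes attention mass placed on non-causal tokens.

```python
import numpy as np

def re_attention_loss(attn, causal_mask, eps=1e-8):
    """Auxiliary loss nudging attention toward causally relevant tokens.

    attn:        (batch, heads, q_len, k_len) attention probabilities.
    causal_mask: (batch, k_len) binary array, 1 = token carries a causal signal.

    Hypothetical sketch only: the actual CAT / Re-Attention formulation
    is defined in the paper, not reproduced here.
    """
    # Normalize the causal mask into a target distribution over key tokens.
    target = causal_mask.astype(float)
    target = target / (target.sum(axis=-1, keepdims=True) + eps)   # (batch, k_len)
    target = target[:, None, None, :]                              # broadcast over heads, queries
    # Cross-entropy between the causal target distribution and the attention scores.
    return float(-(target * np.log(attn + eps)).sum(axis=-1).mean())
```

Under this sketch, attention that concentrates on the causally marked tokens yields a lower loss than uniform attention, which is the direction of pressure the abstract describes: the model is guided toward causal structure rather than spurious correlates.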
September 15, 2025