

LIBERTy: A Causal Framework for Benchmarking Concept-Based Explanations of LLMs with Structural Counterfactuals

January 15, 2026
Authors: Gilat Toker, Nitay Calderon, Ohad Amosy, Roi Reichart
cs.AI

Abstract

Concept-based explanations quantify how high-level concepts (e.g., gender or experience) influence model behavior, which is crucial for decision-makers in high-stakes domains. Recent work evaluates the faithfulness of such explanations by comparing them to reference causal effects estimated from counterfactuals. In practice, existing benchmarks rely on costly human-written counterfactuals that serve only as an imperfect proxy. To address this, we introduce a framework for constructing datasets of structural counterfactual pairs: LIBERTy (LLM-based Interventional Benchmark for Explainability with Reference Targets). LIBERTy is grounded in explicitly defined Structural Causal Models (SCMs) of the text-generation process; interventions on a concept propagate through the SCM until an LLM generates the counterfactual. We introduce three datasets (disease detection, CV screening, and workplace violence prediction) together with a new evaluation metric, order-faithfulness. Using these resources, we evaluate a wide range of methods across five models and identify substantial headroom for improving concept-based explanations. LIBERTy also enables systematic analysis of model sensitivity to interventions: we find that proprietary LLMs show markedly reduced sensitivity to demographic concepts, likely due to post-training mitigation. Overall, LIBERTy provides a much-needed benchmark for developing faithful explainability methods.
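The core mechanism, an intervention on one concept that propagates through an explicit SCM before an LLM verbalizes the result, can be illustrated with a minimal sketch. Everything below (the `Profile` variables, `sample_name`, and the string-template `render_cv` standing in for the LLM generation step) is a hypothetical toy for a CV-screening-style setting, not the paper's implementation:

```python
# Minimal sketch of a structural counterfactual pair: apply do(concept := value),
# recompute the concept's descendants in a toy SCM, then render the text.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Profile:
    gender: str      # concept under intervention
    seniority: str   # independent concept in this toy SCM
    name: str        # child of gender in the SCM

def sample_name(gender: str) -> str:
    # Structural equation name := f(gender); hypothetical mapping.
    return {"female": "Dana Levi", "male": "Daniel Levi"}[gender]

def render_cv(profile: Profile) -> str:
    # Stand-in for the LLM generation step that verbalizes the
    # (post-intervention) SCM variables into a document.
    return (f"{profile.name} is a {profile.seniority} engineer "
            f"applying for the role.")

def counterfactual(profile: Profile, concept: str, value: str) -> Profile:
    # do(concept := value), then recompute descendants of the concept.
    intervened = replace(profile, **{concept: value})
    return replace(intervened, name=sample_name(intervened.gender))

factual = Profile(gender="female", seniority="senior",
                  name=sample_name("female"))
cf = counterfactual(factual, "gender", "male")

print(render_cv(factual))  # factual text
print(render_cv(cf))       # structural counterfactual text
```

The key property of such pairs is that only the intervened concept and its SCM descendants change, so the difference in model behavior on the two texts gives a reference causal effect against which concept-based explanations can be scored.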