LIBERTy: A Causal Framework for Benchmarking Concept-Based Explanations of LLMs with Structural Counterfactuals
January 15, 2026
Authors: Gilat Toker, Nitay Calderon, Ohad Amosy, Roi Reichart
cs.AI
Abstract
Concept-based explanations quantify how high-level concepts (e.g., gender or experience) influence model behavior, which is crucial for decision-makers in high-stakes domains. Recent work evaluates the faithfulness of such explanations by comparing them to reference causal effects estimated from counterfactuals. In practice, existing benchmarks rely on costly human-written counterfactuals that serve as an imperfect proxy. To address this, we introduce a framework for constructing datasets containing structural counterfactual pairs: LIBERTy (LLM-based Interventional Benchmark for Explainability with Reference Targets). LIBERTy is grounded in explicitly defined Structural Causal Models (SCMs) of the text-generation process: interventions on a concept propagate through the SCM until an LLM generates the counterfactual text. We introduce three datasets (disease detection, CV screening, and workplace violence prediction) together with a new evaluation metric, order-faithfulness. Using them, we evaluate a wide range of methods across five models and identify substantial headroom for improving concept-based explanations. LIBERTy also enables systematic analysis of model sensitivity to interventions: we find that proprietary LLMs show markedly reduced sensitivity to demographic concepts, likely due to post-training mitigation. Overall, LIBERTy provides a much-needed benchmark for developing faithful explainability methods.
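To make the structural-counterfactual idea concrete, the following is a minimal toy sketch, not the paper's actual pipeline: variable names, the CV-screening-style SCM, and the template (standing in for the LLM generation step) are all illustrative assumptions. A do-intervention overwrites one concept, and re-running the downstream mechanisms propagates the change into the generated text.

```python
# Hedged toy sketch of a structural counterfactual. The SCM below is a
# hypothetical CV-screening example, not one of the paper's datasets.

def scm_generate(gender: str, experience_years: int) -> dict:
    """Toy SCM: 'gender' causally determines the name, and both
    concepts feed the text mechanism (a template stands in for the LLM)."""
    name = {"female": "Maria", "male": "Mark"}[gender]  # child of 'gender'
    text = f"{name} has {experience_years} years of experience."
    return {"gender": gender, "experience_years": experience_years,
            "name": name, "text": text}

def structural_counterfactual(factual: dict, concept: str, value) -> dict:
    """do(concept := value): overwrite the concept, keep the other
    variables, and re-run the mechanisms so the intervention propagates."""
    settings = {"gender": factual["gender"],
                "experience_years": factual["experience_years"]}
    settings[concept] = value  # the intervention
    return scm_generate(settings["gender"], settings["experience_years"])

factual = scm_generate("female", 7)
counterfactual = structural_counterfactual(factual, "gender", "male")
# The pair (factual["text"], counterfactual["text"]) differs only in
# gender-dependent variables, giving a reference causal effect to
# compare concept-based explanations against.
```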