Evaluate Bias without Manual Test Sets: A Concept Representation Perspective for LLMs
May 21, 2025
Authors: Lang Gao, Kaiyang Wan, Wei Liu, Chenxi Wang, Zirui Song, Zixiang Xu, Yanbo Wang, Veselin Stoyanov, Xiuying Chen
cs.AI
Abstract
Bias in Large Language Models (LLMs) significantly undermines their
reliability and fairness. We focus on a common form of bias: when two reference
concepts in the model's concept space, such as sentiment polarities (e.g.,
"positive" and "negative"), are asymmetrically correlated with a third, target
concept, such as a review aspect, the model exhibits unintended bias. For
instance, the understanding of "food" should not skew toward any particular
sentiment. Existing bias evaluation methods assess behavioral differences of
LLMs by constructing labeled data for different social groups and measuring
model responses across them, a process that requires substantial human effort
and captures only a limited set of social concepts. To overcome these
limitations, we propose BiasLens, a test-set-free bias analysis framework based
on the structure of the model's vector space. BiasLens combines Concept
Activation Vectors (CAVs) with Sparse Autoencoders (SAEs) to extract
interpretable concept representations, and quantifies bias by measuring the
variation in representational similarity between the target concept and each of
the reference concepts. Even without labeled data, BiasLens shows strong
agreement with traditional bias evaluation metrics (Spearman correlation r >
0.85). Moreover, BiasLens reveals forms of bias that are difficult to detect
using existing methods. For example, in simulated clinical scenarios, a
patient's insurance status can cause the LLM to produce biased diagnostic
assessments. Overall, BiasLens offers a scalable, interpretable, and efficient
paradigm for bias discovery, paving the way for improving fairness and
transparency in LLMs.
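To make the quantification step concrete, the sketch below illustrates the core idea in the abstract: represent each concept as a direction in the model's hidden space and score bias as the asymmetry in the target concept's similarity to the two reference concepts. This is a minimal illustration under stated assumptions, not the authors' released implementation: it substitutes a difference-of-means direction for the paper's CAV-plus-SAE extraction, and all function names and toy data here are hypothetical.

```python
# Minimal sketch of the bias-scoring idea, NOT the BiasLens implementation.
# Assumption: concepts are unit direction vectors in an LLM's hidden space
# (a difference-of-means direction stands in for a trained CAV, and the
# SAE-based interpretability step is omitted entirely).
import numpy as np


def concept_direction(pos_acts: np.ndarray, neg_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means concept direction (a simple CAV stand-in).

    pos_acts / neg_acts: (n_examples, hidden_dim) activations for inputs
    that do / do not express the concept.
    """
    v = pos_acts.mean(axis=0) - neg_acts.mean(axis=0)
    return v / np.linalg.norm(v)


def bias_score(target: np.ndarray, ref_a: np.ndarray, ref_b: np.ndarray) -> float:
    """Asymmetry in representational similarity between a target concept
    and two reference concepts; 0 means the target aligns equally with both."""
    cos = lambda u, w: float(u @ w / (np.linalg.norm(u) * np.linalg.norm(w)))
    return cos(target, ref_a) - cos(target, ref_b)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 64
    # Toy Gaussian activations standing in for real LLM hidden states.
    food = concept_direction(rng.normal(0.5, 1, (32, dim)),
                             rng.normal(0.0, 1, (32, dim)))
    positive = concept_direction(rng.normal(0.4, 1, (32, dim)),
                                 rng.normal(0.0, 1, (32, dim)))
    negative = concept_direction(rng.normal(-0.4, 1, (32, dim)),
                                 rng.normal(0.0, 1, (32, dim)))
    # A score far from 0 would indicate "food" skews toward one sentiment.
    print(f"bias(food; positive vs. negative) = {bias_score(food, positive, negative):+.3f}")
```

Because the score depends only on vectors extracted from the model's own representation space, no labeled test set is needed at evaluation time, which is the property the abstract emphasizes.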