

Quantifying Fairness in LLMs Beyond Tokens: A Semantic and Statistical Perspective

June 23, 2025
Authors: Weijie Xu, Yiwen Wang, Chi Xue, Xiangkun Hu, Xi Fang, Guimin Dong, Chandan K. Reddy
cs.AI

Abstract

Large Language Models (LLMs) often generate responses with inherent biases, undermining their reliability in real-world applications. Existing evaluation methods typically overlook biases in long-form responses and the intrinsic variability of LLM outputs. To address these challenges, we propose FiSCo (Fine-grained Semantic Computation), a novel statistical framework to evaluate group-level fairness in LLMs by detecting subtle semantic differences in long-form responses across demographic groups. Unlike prior work focusing on sentiment or token-level comparisons, FiSCo goes beyond surface-level analysis by operating at the claim level, leveraging entailment checks to assess the consistency of meaning across responses. We decompose model outputs into semantically distinct claims and apply statistical hypothesis testing to compare inter- and intra-group similarities, enabling robust detection of subtle biases. We formalize a new group counterfactual fairness definition and validate FiSCo on both synthetic and human-annotated datasets spanning gender, race, and age. Experiments show that FiSCo more reliably identifies nuanced biases while reducing the impact of stochastic LLM variability, outperforming various evaluation metrics.
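The core statistical idea described above (comparing inter-group against intra-group response similarities with a hypothesis test) can be illustrated with a minimal sketch. This is not the paper's implementation: the `similarity` placeholder below uses token-level Jaccard overlap where FiSCo actually scores claim-level entailment, and Welch's t-test is an assumed choice, since the abstract does not name a specific test.

```python
# Minimal sketch of the inter- vs. intra-group similarity comparison.
# Assumptions: Jaccard token overlap stands in for FiSCo's claim-level
# entailment scoring; Welch's t-test stands in for the unnamed test.
from itertools import combinations, product
from scipy.stats import ttest_ind

def similarity(a: str, b: str) -> float:
    """Placeholder similarity: Jaccard overlap of lowercased tokens."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 1.0

def intra_group_sims(responses):
    """Pairwise similarities within one demographic group's responses."""
    return [similarity(a, b) for a, b in combinations(responses, 2)]

def inter_group_sims(group_a, group_b):
    """Pairwise similarities across two demographic groups' responses."""
    return [similarity(a, b) for a, b in product(group_a, group_b)]

def detect_group_bias(group_a, group_b, alpha=0.05):
    """Flag bias when inter-group similarity is significantly lower
    than intra-group similarity (H0: the two distributions match)."""
    intra = intra_group_sims(group_a) + intra_group_sims(group_b)
    inter = inter_group_sims(group_a, group_b)
    stat, p_value = ttest_ind(intra, inter, equal_var=False)  # Welch's t-test
    return {"t": stat, "p": p_value, "biased": p_value < alpha and stat > 0}

# Hypothetical usage: responses to the same prompt where only the
# demographic attribute was varied (a group counterfactual setup).
group_a = [
    "She is a strong candidate with proven leadership.",
    "She shows clear leadership and strong results.",
    "She is ready for a senior leadership role.",
]
group_b = [
    "He may need more mentoring before leading.",
    "He could benefit from further coaching first.",
    "He is promising but not yet ready to lead.",
]
print(detect_group_bias(group_a, group_b))
```

The design point the sketch preserves is that bias is tested at the group level: intra-group variation serves as the baseline for the stochastic variability of LLM outputs, so only cross-group differences exceeding that baseline are flagged.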