Can Fairness Be Prompted? Prompt-Based Debiasing Strategies in High-Stakes Recommendations

March 13, 2026
Authors: Mihaela Rotar, Theresia Veronika Rampisela, Maria Maistro
cs.AI

Abstract

Large Language Models (LLMs) can infer sensitive attributes such as gender or age from indirect cues like names and pronouns, potentially biasing recommendations. While several debiasing methods exist, they require access to the LLMs' weights, are computationally costly, and cannot be used by lay users. To address this gap, we investigate implicit biases in LLM Recommenders (LLMRecs) and explore whether prompt-based strategies can serve as a lightweight and easy-to-use debiasing approach. We contribute three bias-aware prompting strategies for LLMRecs. To our knowledge, this is the first study on prompt-based debiasing approaches in LLMRecs that focuses on group fairness for users. Our experiments with 3 LLMs, 4 prompt templates, 9 sensitive attribute values, and 2 datasets show that our proposed debiasing approach, which instructs an LLM to be fair, can improve fairness by up to 74% while retaining comparable effectiveness, but might overpromote specific demographic groups in some cases.
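
The abstract does not spell out the three bias-aware prompting strategies, so the sketch below only illustrates the general idea of instruction-based debiasing for an LLM recommender: prepending a fairness instruction to an otherwise unchanged recommendation prompt. The instruction wording, the build_prompt helper, and the complete stub are all assumptions for illustration, not the paper's actual templates or experimental setup.

# Minimal sketch of prompt-based debiasing for an LLM recommender.
# Everything here is illustrative: the fairness instruction wording,
# the prompt layout, and the `complete` stub are assumptions, not the
# paper's actual templates.

FAIRNESS_INSTRUCTION = (
    "Ignore any sensitive attributes (e.g., gender or age) that could be "
    "inferred from the user's name, pronouns, or phrasing, and recommend "
    "items based only on the stated preferences."
)

def build_prompt(user_profile: str, candidates: list[str], debias: bool) -> str:
    """Compose a recommendation prompt, optionally prefixed with a fairness instruction."""
    parts = []
    if debias:
        parts.append(FAIRNESS_INSTRUCTION)
    parts.append(f"User profile: {user_profile}")
    parts.append("Candidate items:\n" + "\n".join(f"- {c}" for c in candidates))
    parts.append("Rank the candidate items from most to least suitable for this user.")
    return "\n\n".join(parts)

def complete(prompt: str) -> str:
    """Stand-in for a call to any chat LLM API (hypothetical)."""
    return "<model ranking here>"

if __name__ == "__main__":
    profile = "Alex, enjoys sci-fi novels and long-distance running."
    items = ["Dune", "Born to Run", "Pride and Prejudice"]
    print(build_prompt(profile, items, debias=True))
    print(complete(build_prompt(profile, items, debias=True)))

Comparing the model's rankings with debias=True against debias=False across user groups is the general shape of the evaluation the abstract describes; the paper's actual fairness metrics and templates are not given here.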