

Aligned but Stereotypical? The Hidden Influence of System Prompts on Social Bias in LVLM-Based Text-to-Image Models

December 4, 2025
作者: NaHyeon Park, Namin An, Kunhee Kim, Soyeon Yoon, Jiahao Huo, Hyunjung Shim
cs.AI

Abstract

Large vision-language model (LVLM) based text-to-image (T2I) systems have become the dominant paradigm in image generation, yet whether they amplify social biases remains insufficiently understood. In this paper, we show that LVLM-based models produce markedly more socially biased images than non-LVLM-based models. We introduce a 1,024-prompt benchmark spanning four levels of linguistic complexity and systematically evaluate demographic bias across multiple attributes. Our analysis identifies system prompts, the predefined instructions guiding LVLMs, as a primary driver of biased behavior. Through decoded intermediate representations, token-probability diagnostics, and embedding-association analyses, we reveal how system prompts encode demographic priors that propagate into image synthesis. To address this, we propose FairPro, a training-free meta-prompting framework that enables LVLMs to self-audit and construct fairness-aware system prompts at test time. Experiments on two LVLM-based T2I models, SANA and Qwen-Image, show that FairPro substantially reduces demographic bias while preserving text-image alignment. We believe our findings provide deeper insight into the central role of system prompts in bias propagation and offer a practical, deployable approach for building more socially responsible T2I systems.
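
The test-time self-audit idea can be illustrated with a minimal sketch. This is not the paper's FairPro implementation; `query_lvlm`, `META_PROMPT`, and the two-step audit-then-rewrite flow are illustrative assumptions standing in for whatever chat interface the deployed LVLM exposes.

```python
# Minimal sketch of training-free meta-prompting for fairness-aware system
# prompts. `query_lvlm` is a hypothetical stand-in for the deployed LVLM's
# chat interface; this is NOT the paper's actual FairPro implementation.
from typing import Callable

META_PROMPT = (
    "You are auditing a text-to-image system prompt for demographic bias. "
    "List any assumptions the prompt encodes about gender, age, or ethnicity, "
    "then rewrite it so that, when the user prompt leaves these attributes "
    "unspecified, the generated descriptions stay neutral and diverse."
)

def build_fair_system_prompt(
    base_system_prompt: str,
    query_lvlm: Callable[[str, str], str],
) -> str:
    """Ask the LVLM to self-audit its own system prompt and return a
    fairness-aware rewrite, with no fine-tuning involved."""
    # Step 1: self-audit -- surface demographic priors baked into the prompt.
    audit = query_lvlm(
        META_PROMPT,
        f"System prompt under review:\n{base_system_prompt}",
    )
    # Step 2: construct the revised system prompt from the audit findings.
    revised = query_lvlm(
        "Produce only the rewritten system prompt, nothing else.",
        f"Audit findings:\n{audit}\n\nOriginal prompt:\n{base_system_prompt}",
    )
    return revised.strip()
```

The revised prompt would then replace the model's default system prompt before image generation, which is why the approach is deployable without retraining the underlying T2I model.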