Towards Simulating Social Media Users with LLMs: Evaluating the Operational Validity of Conditioned Comment Prediction
February 26, 2026
Authors: Nils Schwager, Simon Münker, Alistair Plum, Achim Rettinger
cs.AI
Abstract
The transition of Large Language Models (LLMs) from exploratory tools to active "silicon subjects" in social science still lacks extensive validation of operational validity. This study introduces Conditioned Comment Prediction (CCP), a task in which a model predicts how a user would comment on a given stimulus; predictions are evaluated by comparing generated outputs with authentic digital traces. This framework enables a rigorous evaluation of current LLM capabilities for simulating social media user behavior. We evaluate open-weight 8B models (Llama3.1, Qwen3, Ministral) in English, German, and Luxembourgish scenarios. By systematically comparing prompting strategies (explicit vs. implicit) and the impact of Supervised Fine-Tuning (SFT), we identify a critical decoupling of form and content in low-resource settings: while SFT aligns the surface structure of the text output (length and syntax), it degrades semantic grounding. Furthermore, we demonstrate that explicit conditioning (via generated biographies) becomes redundant under fine-tuning, as models successfully perform latent inference directly from behavioral histories. These findings challenge current "naive prompting" paradigms and yield operational guidelines that prioritize authentic behavioral traces over descriptive personas for high-fidelity simulation.
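To make the two conditioning regimes concrete, the sketch below shows one way the CCP setup could be framed: a prompt is assembled either explicitly (from a descriptive persona) or implicitly (from the user's comment history), and the generated comment is scored against the authentic one. All function names are illustrative assumptions; the paper's actual prompts, models, and semantic metrics are not specified here, and lexical overlap merely stands in for a real semantic-grounding measure.

```python
# Hypothetical sketch of Conditioned Comment Prediction (CCP).
# Not the authors' pipeline: prompt wording, conditioning format,
# and the overlap metric are all illustrative placeholders.

def build_prompt(stimulus, history=None, biography=None):
    """Assemble an explicit (persona) or implicit (history) CCP prompt."""
    parts = []
    if biography:  # explicit conditioning via a generated biography
        parts.append(f"User persona: {biography}")
    if history:    # implicit conditioning via authentic digital traces
        parts.append("Previous comments by this user:")
        parts.extend(f"- {comment}" for comment in history)
    parts.append(f"Post: {stimulus}")
    parts.append("Write this user's comment on the post:")
    return "\n".join(parts)

def lexical_overlap(generated, authentic):
    """Jaccard word overlap: a crude stand-in for semantic grounding."""
    a = set(generated.lower().split())
    b = set(authentic.lower().split())
    return len(a & b) / len(a | b) if a | b else 0.0

# Implicit conditioning: only behavioral history, no persona text.
prompt = build_prompt(
    "City opens a new park downtown",
    history=["Love seeing more green spaces!", "Great for the kids."],
)
score = lexical_overlap("finally more green spaces", "love more green spaces")
```

An evaluation run would generate a comment from `prompt` with each model variant (prompted vs. SFT) and aggregate such scores over held-out user-stimulus pairs; the form-vs-content finding corresponds to surface statistics (length, syntax) improving under SFT while grounding scores like this one degrade.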