When Reasoning Models Hurt Behavioral Simulation: A Solver-Sampler Mismatch in Multi-Agent LLM Negotiation

April 12, 2026
作者: Sandro Andric
cs.AI

Abstract

Large language models are increasingly used as agents in social, economic, and policy simulations. A common assumption is that stronger reasoning should improve simulation fidelity. We argue that this assumption can fail when the objective is not to solve a strategic problem but to sample plausible boundedly rational behavior. In such settings, reasoning-enhanced models can become better solvers and worse simulators: they can over-optimize for strategically dominant actions, collapse compromise-oriented terminal behavior, and sometimes exhibit a diversity-without-fidelity pattern in which local variation survives without outcome-level fidelity. We study this solver-sampler mismatch in three multi-agent negotiation environments adapted from earlier simulation work: an ambiguous fragmented-authority trading-limits scenario, an ambiguous unified-opposition trading-limits scenario, and a new-domain grid-curtailment case in emergency electricity management. We compare three reflection conditions (no reflection, bounded reflection, and native reasoning) across two primary model families, then extend the same protocol to direct OpenAI runs with GPT-4.1 and GPT-5.2. Across all three experiments, bounded reflection produces substantially more diverse and compromise-oriented trajectories than either no reflection or native reasoning. In the direct OpenAI extension, GPT-5.2 with native reasoning ends in authority decisions in 45 of 45 runs across the three experiments, while GPT-5.2 with bounded reflection recovers compromise outcomes in every environment. The contribution is not a claim that reasoning is generally harmful; it is a methodological warning: model capability and simulation fidelity are different objectives, and behavioral simulation should qualify models as samplers, not only as solvers.
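
The three reflection conditions can be read as three sampling policies wrapped around the same agent prompt. Below is a minimal, hypothetical sketch of that protocol; the paper does not publish code, so every name here (call_model, negotiate prompts, REFLECTION_BUDGET, the stub model) is an illustrative assumption rather than the authors' implementation.

```python
"""Hypothetical sketch of the three reflection conditions in the abstract.

All identifiers are illustrative reconstructions, not the paper's code.
"""
from typing import Callable

Model = Callable[[str], str]  # prompt -> completion

REFLECTION_BUDGET = 2  # "bounded reflection": a small, fixed number of passes


def no_reflection(model: Model, prompt: str) -> str:
    """Condition 1: act directly from the persona prompt, no self-critique."""
    return model(prompt)


def bounded_reflection(model: Model, prompt: str) -> str:
    """Condition 2: a fixed, small number of critique-and-revise passes.

    The hard bound is the point: the agent reconsiders its action a little,
    as a boundedly rational actor would, without iterating toward a
    strategically dominant fixed point.
    """
    action = model(prompt)
    for _ in range(REFLECTION_BUDGET):
        critique = model(
            f"{prompt}\n\nDraft action:\n{action}\n\n"
            "Briefly critique this draft, staying in character."
        )
        action = model(
            f"{prompt}\n\nDraft action:\n{action}\n\n"
            f"Critique:\n{critique}\n\nRevised action:"
        )
    return action


def native_reasoning(model: Model, prompt: str) -> str:
    """Condition 3: no external loop; deliberation is delegated entirely to
    a reasoning-enabled model variant passed in as `model`."""
    return model(prompt)


if __name__ == "__main__":
    # Stub model so the sketch runs without any API access.
    echo: Model = lambda p: f"<completion for {len(p)}-char prompt>"
    print(bounded_reflection(echo, "You are a negotiator for region A..."))
```

Under this reading, the paper's finding is that the middle policy, not the most capable one, best preserves diverse, compromise-oriented trajectories.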