

Beyond the Assistant Turn: User Turn Generation as a Probe of Interaction Awareness in Language Models

April 3, 2026
作者: Sarath Shekkizhar, Romain Cosentino, Adam Earle
cs.AI

Abstract

Standard LLM benchmarks evaluate the assistant turn: the model generates a response to an input, a verifier scores correctness, and the analysis ends. This paradigm leaves unmeasured whether the LLM encodes any awareness of what follows the assistant response. We propose user-turn generation as a probe of this gap: given a conversation context of user query and assistant response, we let a model generate under the user role. If the model's weights encode interaction awareness, the generated user turn will be a grounded follow-up that reacts to the preceding context. Through experiments across 11 open-weight LLMs (Qwen3.5, gpt-oss, GLM) and 5 datasets (math reasoning, instruction following, conversation), we show that interaction awareness is decoupled from task accuracy. In particular, within the Qwen3.5 family, GSM8K accuracy scales from 41% (0.8B) to 96.8% (397B-A17B), yet genuine follow-up rates under deterministic generation remain near zero. In contrast, higher-temperature sampling reveals that interaction awareness is latent, with follow-up rates reaching 22%. Controlled perturbations validate that the proposed probe measures a real property of the model, and collaboration-oriented post-training on Qwen3.5-2B demonstrates an increase in follow-up rates. Our results show that user-turn generation captures a dimension of LLM behavior, interaction awareness, that is unexplored and invisible with current assistant-only benchmarks.
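The probe described above can be sketched in a few lines: the conversation is serialized with the second user turn left open, so the model continues in the user role rather than the assistant role. This is a minimal illustration, not the paper's code; the role markers below follow the ChatML convention used by Qwen-style models, and the helper name and example strings are our own.

```python
# Build a prompt that ends with an *opened* user turn, so that a chat
# model's next tokens are generated under the user role.
# ChatML-style role markers (Qwen-family convention); other chat
# templates would need their own markers.

def user_turn_probe_prompt(user_query: str, assistant_response: str) -> str:
    """Serialize a completed user/assistant exchange, then open a
    second user turn and leave it unterminated for the model to fill."""
    return (
        f"<|im_start|>user\n{user_query}<|im_end|>\n"
        f"<|im_start|>assistant\n{assistant_response}<|im_end|>\n"
        f"<|im_start|>user\n"  # generation continues here, as the user
    )

# Illustrative GSM8K-style exchange (placeholder text, not from the paper)
prompt = user_turn_probe_prompt(
    "A store sells 48 apples in the morning and half as many in the afternoon. How many in total?",
    "Morning: 48. Afternoon: 48 / 2 = 24. Total: 48 + 24 = 72. The answer is 72.",
)
```

Feeding `prompt` to the model, per the paper with temperature above zero since the awareness is latent, yields a candidate user turn, which is then judged as a grounded follow-up versus a generic or off-context continuation.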