Whom to Query for What: Adaptive Group Elicitation via Multi-Turn LLM Interactions
February 15, 2026
Authors: Ruomeng Ding, Tianwei Gao, Thomas P. Zollo, Eitan Bachmat, Richard Zemel, Zhun Deng
cs.AI
Abstract
Eliciting information to reduce uncertainty about latent group-level properties from surveys and other collective assessments requires allocating limited questioning effort under real costs and missing data. Although large language models enable adaptive, multi-turn interactions in natural language, most existing elicitation methods optimize what to ask with a fixed respondent pool, and do not adapt respondent selection or leverage population structure when responses are partial or incomplete. To address this gap, we study adaptive group elicitation, a multi-round setting where an agent adaptively selects both questions and respondents under explicit query and participation budgets. We propose a theoretically grounded framework that combines (i) an LLM-based expected information gain objective for scoring candidate questions with (ii) heterogeneous graph neural network propagation that aggregates observed responses and participant attributes to impute missing responses and guide per-round respondent selection. This closed-loop procedure queries a small, informative subset of individuals while inferring population-level responses via structured similarity. Across three real-world opinion datasets, our method consistently improves population-level response prediction under constrained budgets, including a >12% relative gain on CES at a 10% respondent budget.
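The abstract's first component scores candidate questions by expected information gain (EIG): prior uncertainty about the latent group property minus the expected posterior uncertainty after observing an answer. The paper computes this objective with an LLM; the sketch below is only a toy discrete version of the same quantity, with hypothetical names (`expected_information_gain`, the example priors and likelihood tables are all illustrative, not from the paper).

```python
import math

def entropy(p):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(x * math.log(x) for x in p if x > 0)

def expected_information_gain(prior, likelihoods):
    """EIG of one candidate question:
    H(prior over latent states) - E_a[H(posterior | answer a)].

    prior       -- p(theta) over latent states, list of floats
    likelihoods -- likelihoods[s][a] = p(answer a | theta = s)
    """
    n_states, n_answers = len(prior), len(likelihoods[0])
    # Marginal probability of each answer under the prior.
    p_ans = [sum(prior[s] * likelihoods[s][a] for s in range(n_states))
             for a in range(n_answers)]
    eig = entropy(prior)
    for a in range(n_answers):
        if p_ans[a] == 0:
            continue
        # Bayes posterior over latent states given answer a.
        post = [prior[s] * likelihoods[s][a] / p_ans[a]
                for s in range(n_states)]
        eig -= p_ans[a] * entropy(post)
    return eig

# Two candidate questions about a binary latent group property.
prior = [0.5, 0.5]
informative = [[0.9, 0.1], [0.1, 0.9]]    # answers strongly track theta
uninformative = [[0.5, 0.5], [0.5, 0.5]]  # answers independent of theta
scores = {"q_informative": expected_information_gain(prior, informative),
          "q_uninformative": expected_information_gain(prior, uninformative)}
best = max(scores, key=scores.get)  # the question the agent would ask next
```

A question whose answers are independent of the latent property gets an EIG of zero, so under a query budget it is never selected over one whose answer distribution shifts with the property.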
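The second component imputes responses of unqueried respondents by propagating observed answers through structured similarity over participant attributes. The paper uses a heterogeneous graph neural network for this; the following is merely a one-step, hand-rolled analogue (similarity-weighted averaging with Jaccard overlap on attribute sets) to illustrate the idea, and every name and attribute in it is hypothetical.

```python
def impute_response(target_attrs, observed):
    """One propagation step: estimate a missing response as a
    similarity-weighted average of observed responses, where similarity
    is the Jaccard overlap between participant attribute sets.

    target_attrs -- set of attributes of the unqueried respondent
    observed     -- list of (attribute_set, response_float) pairs
    """
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    weights = [jaccard(target_attrs, attrs) for attrs, _ in observed]
    total = sum(weights)
    if total == 0:
        # No attribute overlap: fall back to the plain mean response.
        return sum(r for _, r in observed) / len(observed)
    return sum(w * r for w, (_, r) in zip(weights, observed)) / total

# Three queried respondents with attribute sets and scalar responses.
observed = [({"urban", "young"}, 1.0),
            ({"rural", "old"},   0.0),
            ({"urban", "old"},   0.6)]
# Estimate the response of an unqueried "urban, young" respondent.
est = impute_response({"urban", "young"}, observed)
```

The estimate leans toward the most similar observed respondents, which is the intuition behind querying only a small informative subset while inferring the rest of the population's responses through shared structure.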