Knowing Isn't Understanding: Re-grounding Generative Proactivity with Epistemic and Behavioral Insight
February 16, 2026
Authors: Kirandeep Kaur, Xingda Lyu, Chirag Shah
cs.AI
Abstract
Generative AI agents equate understanding with resolving explicit queries, an assumption that confines interaction to what users can already articulate. This assumption breaks down when users themselves are unaware of what is missing, risky, or worth considering. In such conditions, proactivity is not merely an efficiency enhancement but an epistemic necessity. We call this condition epistemic incompleteness: a state in which effective partnership depends on engaging with unknown unknowns. Existing approaches to proactivity remain narrowly anticipatory, extrapolating from past behavior and presuming that goals are already well defined, and thereby fail to support users meaningfully. However, surfacing possibilities beyond a user's current awareness is not inherently beneficial: unconstrained proactive interventions can misdirect attention, overwhelm users, or cause harm. Proactive agents therefore require behavioral grounding: principled constraints on when, how, and to what extent an agent should intervene. We advance the position that generative proactivity must be grounded both epistemically and behaviorally. Drawing on the philosophy of ignorance and research on proactive behavior, we argue that these theories offer critical guidance for designing agents that engage responsibly and foster meaningful partnerships.