Knowing Isn't Understanding: Re-grounding Generative Proactivity with Epistemic and Behavioral Insight
February 16, 2026
Authors: Kirandeep Kaur, Xingda Lyu, Chirag Shah
cs.AI
Abstract
Generative AI agents equate understanding with resolving explicit queries, an assumption that confines interaction to what users can articulate. This assumption breaks down when users themselves lack awareness of what is missing, risky, or worth considering. In such conditions, proactivity is not merely an efficiency enhancement but an epistemic necessity. We refer to this condition as epistemic incompleteness: a state in which effective partnership depends on engaging with unknown unknowns. Existing approaches to proactivity remain narrowly anticipatory, extrapolating from past behavior and presuming that goals are already well defined, and thereby fail to support users meaningfully. However, surfacing possibilities beyond a user's current awareness is not inherently beneficial. Unconstrained proactive interventions can misdirect attention, overwhelm users, or introduce harm. Proactive agents therefore require behavioral grounding: principled constraints on when, how, and to what extent an agent should intervene. We advance the position that generative proactivity must be grounded both epistemically and behaviorally. Drawing on the philosophy of ignorance and research on proactive behavior, we argue that these theories offer critical guidance for designing agents that can engage responsibly and foster meaningful partnerships.