Externalization in LLM Agents: A Unified Review of Memory, Skills, Protocols and Harness Engineering
April 9, 2026
作者: Chenyu Zhou, Huacan Chai, Wenteng Chen, Zihan Guo, Rong Shan, Yuanyi Song, Tianyi Xu, Yingxuan Yang, Aofan Yu, Weiming Zhang, Congming Zheng, Jiachen Zhu, Zeyu Zheng, Zhuosheng Zhang, Xingyu Lou, Changwang Zhang, Zhihui Fu, Jun Wang, Weiwen Liu, Jianghao Lin, Weinan Zhang
cs.AI
Abstract
Large language model (LLM) agents are increasingly built less by changing model weights than by reorganizing the runtime around them. Capabilities that earlier systems expected the model to recover internally are now externalized into memory stores, reusable skills, interaction protocols, and the surrounding harness that makes these modules reliable in practice. This paper reviews that shift through the lens of externalization. Drawing on the idea of cognitive artifacts, we argue that agent infrastructure matters not merely because it adds auxiliary components, but because it transforms hard cognitive burdens into forms that the model can handle more reliably. Under this view, memory externalizes state across time, skills externalize procedural expertise, protocols externalize interaction structure, and harness engineering serves as the unifying layer that coordinates them into governed execution. We trace a historical progression from weights to context to harness, analyze memory, skills, and protocols as three distinct but coupled forms of externalization, and examine how they interact inside a larger agent system. We further discuss the trade-off between parametric and externalized capability, identify emerging directions such as self-evolving harnesses and shared agent infrastructure, and assess open challenges in evaluation, governance, and the long-term co-evolution of models and external infrastructure. The result is a systems-level framework for explaining why practical agent progress increasingly depends not only on stronger models, but on better external cognitive infrastructure.