
Position: Agentic Evolution is the Path to Evolving LLMs

January 30, 2026
作者: Minhua Lin, Hanqing Lu, Zhan Shi, Bing He, Rui Mao, Zhiwei Zhang, Zongyu Wu, Xianfeng Tang, Hui Liu, Zhenwei Dai, Xiang Zhang, Suhang Wang, Benoit Dumoulin, Jian Pei
cs.AI

Abstract

As Large Language Models (LLMs) move from curated training sets into open-ended real-world environments, a fundamental limitation emerges: static training cannot keep pace with continually changing deployment environments. Scaling training-time and inference-time compute improves static capability but does not close this train-deploy gap. We argue that addressing this limitation requires a new scaling axis: evolution. Existing deployment-time adaptation methods, whether parametric fine-tuning or heuristic memory accumulation, lack the strategic agency needed to diagnose failures and produce durable improvements. Our position is that agentic evolution represents the inevitable future of LLM adaptation, elevating evolution itself from a fixed pipeline to an autonomous evolver agent. We instantiate this vision in a general framework, A-Evolve, which treats deployment-time improvement as a deliberate, goal-directed optimization process over persistent system state. We further propose the evolution-scaling hypothesis: the capacity for adaptation scales with the compute allocated to evolution, positioning agentic evolution as a scalable path toward sustained, open-ended adaptation in the real world.
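To make the abstract's framing concrete, below is a minimal, hypothetical Python sketch of what a deployment-time evolver loop of this kind might look like. The abstract does not specify an implementation; every name here (`EvolverAgent`, `SystemState`, `diagnose`, `propose_update`, the `evolution_budget` parameter) is an illustrative assumption rather than the authors' API.

```python
# Hypothetical sketch of a deployment-time "evolver agent" loop in the spirit of
# A-Evolve. All class and function names are illustrative assumptions, not the
# paper's actual interface.

from dataclasses import dataclass, field
from typing import Any, Callable, Dict, List


@dataclass
class SystemState:
    """Persistent state the evolver is allowed to modify (e.g., prompts, memory, config)."""
    prompt: str = "You are a helpful assistant."
    memory: List[str] = field(default_factory=list)
    config: Dict[str, Any] = field(default_factory=dict)


@dataclass
class Episode:
    """A logged deployment interaction and whether it met its goal."""
    task: str
    outcome: str
    success: bool


class EvolverAgent:
    """Treats deployment-time improvement as goal-directed optimization over SystemState."""

    def __init__(self, evaluate: Callable[[SystemState], float], evolution_budget: int):
        self.evaluate = evaluate                  # scores a candidate state on held-out deployment tasks
        self.evolution_budget = evolution_budget  # compute allocated to evolution (the proposed scaling axis)

    def diagnose(self, failures: List[Episode]) -> str:
        # Placeholder: in a real system an LLM call would summarize recurring failure modes here.
        return f"{len(failures)} recent failures, e.g.: " + "; ".join(e.task for e in failures[:3])

    def propose_update(self, state: SystemState, diagnosis: str) -> SystemState:
        # Placeholder: in a real system an LLM call would edit prompts/memory/config from the diagnosis.
        new_state = SystemState(prompt=state.prompt, memory=list(state.memory), config=dict(state.config))
        new_state.memory.append(f"Lesson: {diagnosis}")
        return new_state

    def evolve(self, state: SystemState, episodes: List[Episode]) -> SystemState:
        """One evolution cycle: diagnose failures, then search for a durably better persistent state."""
        failures = [e for e in episodes if not e.success]
        if not failures:
            return state
        diagnosis = self.diagnose(failures)
        best_state, best_score = state, self.evaluate(state)
        for _ in range(self.evolution_budget):    # more evolution compute -> deeper search
            candidate = self.propose_update(best_state, diagnosis)
            score = self.evaluate(candidate)
            if score > best_score:                # keep only updates that improve held-out performance
                best_state, best_score = candidate, score
        return best_state


if __name__ == "__main__":
    # Toy usage: a dummy evaluator that simply rewards accumulated lessons.
    evolver = EvolverAgent(evaluate=lambda s: len(s.memory), evolution_budget=4)
    logged = [Episode("summarize report", "missed key figures", success=False)]
    new_state = evolver.evolve(SystemState(), logged)
    print(new_state.memory)
```

The only design point the sketch tries to reflect is the one stated in the abstract: improvement is a deliberate, goal-directed search over persistent state rather than heuristic accumulation, and the depth of that search (here, `evolution_budget`) is the quantity the evolution-scaling hypothesis proposes to scale.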