Memory Transfer Learning: How Memories are Transferred Across Domains in Coding Agents
April 15, 2026
Authors: Kangsan Kim, Minki Kang, Taeil Kim, Yanlai Yang, Mengye Ren, Sung Ju Hwang
cs.AI
Abstract
Memory-based self-evolution has emerged as a promising paradigm for coding agents. However, existing approaches typically restrict memory utilization to homogeneous task domains, failing to leverage the shared infrastructural foundations, such as runtime environments and programming languages, that exist across diverse real-world coding problems. To address this limitation, we investigate Memory Transfer Learning (MTL) by harnessing a unified memory pool from heterogeneous domains. We evaluate performance across 6 coding benchmarks using four memory representations, ranging from concrete traces to abstract insights. Our experiments demonstrate that cross-domain memory improves average performance by 3.7%, primarily by transferring meta-knowledge, such as validation routines, rather than task-specific code. Importantly, we find that abstraction dictates transferability; high-level insights generalize well, whereas low-level traces often induce negative transfer due to excessive specificity. Furthermore, we show that transfer effectiveness scales with the size of the memory pool, and memory can be transferred even between different models. Our work establishes empirical design principles for expanding memory utilization beyond single-domain silos. Project page: https://memorytransfer.github.io/
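The abstract's core mechanism, a unified pool in which only sufficiently abstract memories cross domain boundaries while concrete traces stay local, could be sketched roughly as follows. This is a minimal illustration under our own assumptions; all class and method names are hypothetical and do not reflect the paper's actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a cross-domain memory pool. Memories from
# heterogeneous coding domains share one pool; each entry records an
# abstraction level ranging from concrete execution traces ("trace")
# to high-level cognitive insights ("insight").

@dataclass
class MemoryEntry:
    domain: str   # source domain, e.g. "algorithmic" or "web"
    level: str    # "trace" (concrete) ... "insight" (abstract)
    content: str  # the stored knowledge itself

@dataclass
class MemoryPool:
    entries: list = field(default_factory=list)

    def add(self, entry: MemoryEntry) -> None:
        self.entries.append(entry)

    def retrieve(self, target_domain: str) -> list:
        # Cross-domain retrieval policy motivated by the finding that
        # abstraction dictates transferability: high-level insights are
        # shared across all domains, while concrete traces are only
        # served within their own domain to avoid negative transfer.
        return [
            e for e in self.entries
            if e.domain == target_domain or e.level == "insight"
        ]
```

For example, an "insight" such as "run the validation suite before submitting" stored from one benchmark would be retrieved for any target domain, whereas a domain-specific trace would not leave its source benchmark.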