Memory Transfer Learning: How Memories are Transferred Across Domains in Coding Agents
April 15, 2026
Authors: Kangsan Kim, Minki Kang, Taeil Kim, Yanlai Yang, Mengye Ren, Sung Ju Hwang
cs.AI
Abstract
Memory-based self-evolution has emerged as a promising paradigm for coding agents. However, existing approaches typically restrict memory utilization to homogeneous task domains, failing to leverage the shared infrastructural foundations, such as runtime environments and programming languages, that exist across diverse real-world coding problems. To address this limitation, we investigate Memory Transfer Learning (MTL) by harnessing a unified memory pool drawn from heterogeneous domains. We evaluate performance across six coding benchmarks using four memory representations, ranging from concrete traces to abstract insights. Our experiments demonstrate that cross-domain memory improves average performance by 3.7%, primarily by transferring meta-knowledge, such as validation routines, rather than task-specific code. Importantly, we find that abstraction dictates transferability: high-level insights generalize well, whereas low-level traces often induce negative transfer due to excessive specificity. Furthermore, we show that transfer effectiveness scales with the size of the memory pool, and that memory can be transferred even between different models. Our work establishes empirical design principles for expanding memory utilization beyond single-domain silos. Project page: https://memorytransfer.github.io/
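To make the idea of a unified cross-domain memory pool concrete, here is a minimal Python sketch. All names (`UnifiedMemoryPool`, the four `LEVELS` labels, the example domains) are hypothetical illustrations, not the paper's actual implementation: memories are tagged with a source domain and an abstraction level, and cross-domain retrieval keeps only sufficiently abstract entries, mirroring the finding that high-level insights transfer while low-level traces can cause negative transfer.

```python
from dataclasses import dataclass

# Hypothetical abstraction levels, ordered from concrete to abstract
# (the paper uses four representations ranging from traces to insights;
# these labels are illustrative placeholders).
LEVELS = ["trace", "workflow", "guideline", "insight"]

@dataclass
class Memory:
    domain: str    # source benchmark/domain the memory came from
    level: str     # abstraction level of the representation
    content: str   # the stored trace or insight text

class UnifiedMemoryPool:
    """Single memory pool shared across heterogeneous coding domains."""

    def __init__(self):
        self.entries: list[Memory] = []

    def add(self, memory: Memory) -> None:
        self.entries.append(memory)

    def retrieve(self, target_domain: str, min_level: str = "guideline") -> list[Memory]:
        # Same-domain memories are always eligible; out-of-domain
        # memories are kept only if abstract enough, so concrete
        # traces from other domains are filtered out.
        threshold = LEVELS.index(min_level)
        return [
            m for m in self.entries
            if m.domain == target_domain
            or LEVELS.index(m.level) >= threshold
        ]

pool = UnifiedMemoryPool()
pool.add(Memory("swe-bench", "trace", "patched utils.py to fix the import"))
pool.add(Memory("swe-bench", "insight", "run the test suite before submitting"))
pool.add(Memory("humaneval", "trace", "used itertools.permutations here"))

# For an unseen domain, only the abstract cross-domain insight transfers.
transferred = pool.retrieve("livecodebench")
print([m.content for m in transferred])  # → ['run the test suite before submitting']
```

Under this sketch, lowering `min_level` would admit concrete traces across domains, which is exactly the regime the abstract reports as prone to negative transfer.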