O-Mem: Omni Memory System for Personalized, Long Horizon, Self-Evolving Agents

November 17, 2025
Authors: Piaohong Wang, Motong Tian, Jiaxian Li, Yuan Liang, Yuqing Wang, Qianben Chen, Tiannan Wang, Zhicong Lu, Jiawei Ma, Yuchen Eleanor Jiang, Wangchunshu Zhou
cs.AI

Abstract

Recent advancements in LLM-powered agents have demonstrated significant potential in generating human-like responses; however, they continue to face challenges in maintaining long-term interactions within complex environments, primarily due to limitations in contextual consistency and dynamic personalization. Existing memory systems often depend on semantic grouping prior to retrieval, which can overlook semantically irrelevant yet critical user information and introduce retrieval noise. In this report, we propose the initial design of O-Mem, a novel memory framework based on active user profiling that dynamically extracts and updates user characteristics and event records from the user's proactive interactions with agents. O-Mem supports hierarchical retrieval of persona attributes and topic-related context, enabling more adaptive and coherent personalized responses. O-Mem achieves 51.67% on the public LoCoMo benchmark, a nearly 3% improvement over LangMem, the previous state of the art, and 62.99% on PERSONAMEM, a 3.5% improvement over A-Mem, the previous state of the art. O-Mem also improves token efficiency and interaction response time compared with previous memory frameworks. Our work opens up promising directions for developing efficient and human-like personalized AI assistants in the future.
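
To make the mechanism described in the abstract concrete, the minimal sketch below illustrates the two ideas it names: a persona profile plus a topic-indexed event log that are updated from each user interaction, and a hierarchical retrieval step that returns persona attributes together with topic-related context. All names here (`OMemSketch`, `update`, `retrieve`) are hypothetical illustrations, not the authors' implementation, which presumably relies on LLM-based attribute extraction and learned retrieval rather than the plain dictionaries used here.

```python
# Hypothetical sketch of the memory flow described in the abstract; not the
# authors' O-Mem implementation.
from __future__ import annotations

from collections import defaultdict
from dataclasses import dataclass, field


@dataclass
class OMemSketch:
    # Persona attributes extracted from user turns, e.g. {"diet": "vegetarian"}.
    persona: dict[str, str] = field(default_factory=dict)
    # Event records grouped by topic, standing in for topic-related context.
    events_by_topic: dict[str, list[str]] = field(
        default_factory=lambda: defaultdict(list)
    )

    def update(self, topic: str, event: str,
               attributes: dict[str, str] | None = None) -> None:
        """Record an event under its topic and merge any extracted persona attributes."""
        self.events_by_topic[topic].append(event)
        if attributes:
            # Newer attribute values overwrite older ones, mimicking profile updates.
            self.persona.update(attributes)

    def retrieve(self, topic: str, k: int = 3) -> dict:
        """Hierarchical retrieval: persona attributes first, then the k most
        recent event records for the queried topic."""
        return {
            "persona": dict(self.persona),
            "topic_context": self.events_by_topic.get(topic, [])[-k:],
        }


if __name__ == "__main__":
    mem = OMemSketch()
    mem.update("travel", "User booked a trip to Kyoto.", {"home_city": "Berlin"})
    mem.update("travel", "User asked for vegetarian restaurants in Kyoto.",
               {"diet": "vegetarian"})
    print(mem.retrieve("travel"))
```

The point of the two-level structure is that persona attributes are always surfaced regardless of semantic similarity to the current query, which is how the abstract argues O-Mem avoids dropping semantically unrelated yet critical user information.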