ComoRAG: A Cognitive-Inspired Memory-Organized RAG for Stateful Long Narrative Reasoning
August 14, 2025
Authors: Juyuan Wang, Rongchen Zhao, Wei Wei, Yufeng Wang, Mo Yu, Jie Zhou, Jin Xu, Liyan Xu
cs.AI
Abstract
Narrative comprehension on long stories and novels has long been a challenging
domain, owing to their intricate plotlines and the entangled, often evolving
relations among characters and entities. Given LLMs' diminished reasoning over
extended contexts and the high computational cost involved, retrieval-based
approaches continue to play a pivotal role in practice. However, traditional
RAG methods can fall short due to their stateless, single-step retrieval
process, which often fails to capture the dynamic, interconnected relations
within long-range context. In this work, we propose ComoRAG, holding the principle
that narrative reasoning is not a one-shot process, but a dynamic, evolving
interplay between new evidence acquisition and past knowledge consolidation,
analogous to human cognition when reasoning with memory-related signals in the
brain. Specifically, when encountering a reasoning impasse, ComoRAG undergoes
iterative reasoning cycles while interacting with a dynamic memory workspace.
In each cycle, it generates probing queries to devise new exploratory paths,
then integrates the retrieved evidence on new aspects into a global memory
pool, thereby supporting the emergence of a coherent context for query
resolution. Across four challenging long-context narrative benchmarks (200K+
tokens), ComoRAG consistently outperforms strong RAG baselines, with relative
gains of up to 11% over the strongest baseline. Further analysis reveals
that ComoRAG is particularly advantageous for complex queries requiring global
comprehension, offering a principled, cognitively motivated paradigm for
retrieval-based long-context comprehension toward stateful reasoning. Our code
is publicly released at https://github.com/EternityJune25/ComoRAG.