ACE: Attribution-Controlled Knowledge Editing for Multi-hop Factual Recall

October 9, 2025
Authors: Jiayu Yang, Yuxuan Fan, Songning Lai, Shengen Wu, Jiaqi Tang, Chun Kang, Zhijiang Guo, Yutao Yue
cs.AI

Abstract

Large Language Models (LLMs) require efficient knowledge editing (KE) to update factual information, yet existing methods exhibit significant performance decay in multi-hop factual recall. This failure is particularly acute when edits involve intermediate implicit subjects within reasoning chains. Through causal analysis, we reveal that this limitation stems from an oversight of how chained knowledge is dynamically represented and utilized at the neuron level. We discover that during multi-hop reasoning, implicit subjects function as query neurons, which sequentially activate corresponding value neurons across transformer layers to accumulate information toward the final answer, a dynamic that prior KE work has overlooked. Guided by this insight, we propose ACE: Attribution-Controlled Knowledge Editing for Multi-hop Factual Recall, a framework that leverages neuron-level attribution to identify and edit these critical query-value (Q-V) pathways. ACE provides a mechanistically grounded solution for multi-hop KE, empirically outperforming state-of-the-art methods by 9.44% on GPT-J and 37.46% on Qwen3-8B. Our analysis further reveals more fine-grained activation patterns in Qwen3 and demonstrates that the semantic interpretability of value neurons is orchestrated by query-driven accumulation. These findings establish a new pathway for advancing KE capabilities based on a principled understanding of internal reasoning mechanisms.
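
The mechanism the abstract describes, implicit subjects acting as query neurons that trigger value neurons across MLP layers, can be probed with standard activation-times-gradient attribution. The sketch below is an illustrative approximation, not the paper's ACE procedure: it assumes GPT-2 as a lightweight stand-in for GPT-J/Qwen3-8B, a hypothetical two-hop prompt, and scores each MLP neuron at the final token by its activation times the gradient of one answer logit.

```python
# Illustrative sketch (NOT the paper's ACE implementation): activation-times-
# gradient attribution over MLP neurons, the kind of neuron-level signal used
# to locate value-neuron candidates on a multi-hop prompt.
# Assumptions: GPT-2 as a small stand-in for GPT-J / Qwen3-8B; a hypothetical
# two-hop query; attribution taken at the final token toward one answer logit.

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # stand-in; the paper itself evaluates GPT-J and Qwen3-8B
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# Two-hop prompt: "Eiffel Tower" -> "France" (implicit subject) -> "Paris".
prompt = "The capital of the country where the Eiffel Tower stands is"
inputs = tok(prompt, return_tensors="pt")

# Capture each layer's post-activation MLP hidden state (value-neuron sites).
acts = {}

def make_hook(layer_idx):
    def hook(module, inp, out):
        out.retain_grad()  # keep gradients for this intermediate tensor
        acts[layer_idx] = out
    return hook

handles = [blk.mlp.act.register_forward_hook(make_hook(i))
           for i, blk in enumerate(model.transformer.h)]

logits = model(**inputs).logits
answer_id = tok(" Paris", add_special_tokens=False).input_ids[0]
logits[0, -1, answer_id].backward()  # gradient of the answer logit

# Score = activation * gradient at the final token; high scores mark neurons
# that push probability toward the answer (rough value-neuron candidates).
for i in sorted(acts):
    score = (acts[i] * acts[i].grad)[0, -1]
    print(f"layer {i:2d} top candidates: {score.topk(3).indices.tolist()}")

for h in handles:
    h.remove()
```

An editing step in the spirit of ACE would then intervene on the weights feeding the highest-scoring neurons; the attribution pass above is only meant to make the Q-V pathway idea concrete.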