

From RAGs to rich parameters: Probing how language models utilize external knowledge over parametric information for factual queries

June 18, 2024
作者: Hitesh Wadhwa, Rahul Seetharaman, Somyaa Aggarwal, Reshmi Ghosh, Samyadeep Basu, Soundararajan Srinivasan, Wenlong Zhao, Shreyas Chaudhari, Ehsan Aghazadeh
cs.AI

Abstract

Retrieval Augmented Generation (RAG) enriches the ability of language models to reason using external context to augment responses for a given user prompt. This approach has risen in popularity due to practical applications of language models in search, question answering, and chatbots. However, exactly how this approach works is not clearly understood. In this paper, we mechanistically examine the RAG pipeline to highlight that language models take a shortcut and have a strong bias towards utilizing only the context information to answer the question, while relying minimally on their parametric memory. We probe this mechanistic behavior in language models with: (i) Causal Mediation Analysis, to show that parametric memory is minimally utilized when answering a question, and (ii) Attention Contributions and Knockouts, to show that the last-token residual stream is enriched not by the subject token in the question but by other informative tokens in the context. We find this pronounced shortcut behaviour holds across both the LLaMa and Phi families of models.
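The attention-knockout probe described in the abstract can be illustrated with a minimal sketch: in a toy single-head attention computation (hypothetical vectors, not the paper's actual models or implementation), we sever the attention edge from a chosen position (e.g., the question's subject token) to the last token by setting its pre-softmax score to negative infinity, then compare the last token's per-position contributions with and without the knockout.

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def last_token_attention(queries, keys, values, knockout=()):
    """Attention of the final token over all positions, with optional knockout.

    knockout: positions whose attention edge to the last token is severed
    (score forced to -inf before softmax), in the spirit of attention-knockout
    probing. Returns (attention weights, per-position contributions), where a
    contribution is the weight times the norm of that position's value vector.
    """
    q = queries[-1]
    d = len(q)
    scores = [dot(q, k) / math.sqrt(d) for k in keys]
    for pos in knockout:
        scores[pos] = float("-inf")  # exp(-inf) == 0, so the edge is removed
    weights = softmax(scores)
    contribs = [w * norm(v) for w, v in zip(weights, values)]
    return weights, contribs

# Toy 4-position, 2-dimensional setup (illustrative values only).
# Suppose position 1 plays the role of the subject token in the question.
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]]
V = [[2.0, 0.0], [0.0, 2.0], [1.0, 1.0], [0.5, 0.5]]

base_w, base_c = last_token_attention(Q, K, V)
ko_w, ko_c = last_token_attention(Q, K, V, knockout=(1,))
```

Comparing `base_c` and `ko_c` shows how much of the last token's update depended on the knocked-out position; in the paper's setting, a small change after knocking out the subject token (versus a large change after knocking out informative context tokens) is the signature of the shortcut behaviour.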