

Context Engineering for Trustworthiness: Rescorla Wagner Steering Under Mixed and Inappropriate Contexts

September 2, 2025
Authors: Rushi Wang, Jiateng Liu, Cheng Qian, Yifan Shen, Yanzhou Pan, Zhaozhuo Xu, Ahmed Abbasi, Heng Ji, Denghui Zhang
cs.AI

Abstract

Incorporating external context can significantly enhance the response quality of Large Language Models (LLMs). However, real-world contexts often mix relevant information with disproportionate inappropriate content, posing reliability risks. How do LLMs process and prioritize mixed context? To study this, we introduce the Poisoned Context Testbed, pairing queries with real-world contexts containing relevant and inappropriate content. Inspired by associative learning in animals, we adapt the Rescorla-Wagner (RW) model from neuroscience to quantify how competing contextual signals influence LLM outputs. Our adapted model reveals a consistent behavioral pattern: LLMs exhibit a strong tendency to incorporate information that is less prevalent in the context. This susceptibility is harmful in real-world settings, where small amounts of inappropriate content can substantially degrade response quality. Empirical evaluations on our testbed further confirm this vulnerability. To tackle this, we introduce RW-Steering, a two-stage fine-tuning-based approach that enables the model to internally identify and ignore inappropriate signals. Unlike prior methods that rely on extensive supervision across diverse context mixtures, RW-Steering generalizes robustly across varying proportions of inappropriate content. Experiments show that our best fine-tuned model improves response quality by 39.8% and reverses the undesirable behavior curve, establishing RW-Steering as a robust, generalizable context engineering solution for improving LLM safety in real-world use.
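As background for the method name: the Rescorla-Wagner model referenced here is the classic trial-level rule for associative learning. Below is a minimal statement of the standard neuroscience formulation with its conventional symbols; the abstract does not detail how the authors adapt it to contextual signals, so this is background rather than the paper's adapted model.

\[
\Delta V_i = \alpha_i \, \beta \, \Bigl( \lambda - \sum_{j} V_j \Bigr)
\]

Here \(V_i\) is the associative strength of cue \(i\), \(\alpha_i\) the salience of that cue, \(\beta\) a learning-rate parameter tied to the outcome, \(\lambda\) the maximum associative strength the outcome supports, and \(\sum_j V_j\) the pooled strength of all cues present on the trial. The key structural feature is that co-occurring cues share a single prediction-error term and thus compete for a fixed amount of associative strength; the paper carries this competition structure over to quantify how relevant and inappropriate signals in a mixed context jointly shape an LLM's response.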