

Spectral Attention Steering for Prompt Highlighting

March 1, 2026
Authors: Weixian Waylon Li, Yuchen Niu, Yongxin Yang, Keshuang Li, Tiejun Ma, Shay B. Cohen
cs.AI

Abstract

Attention steering is an important technique for controlling model focus, enabling capabilities such as prompt highlighting, where the model prioritises user-specified text. However, existing attention steering methods require explicit storage of the full attention matrix, making them incompatible with memory-efficient implementations such as FlashAttention. We introduce Spectral Editing Key Amplification (SEKA), a training-free steering method that addresses this limitation by directly editing key embeddings before the attention computation. SEKA uses spectral decomposition to steer key embeddings towards latent directions that amplify the attention scores of user-specified tokens. We extend this to Adaptive SEKA (AdaSEKA), a query-adaptive variant that uses a training-free routing mechanism to dynamically combine multiple expert subspaces based on the prompt's semantic intent. Our experiments show that both methods significantly outperform strong baselines on standard steering benchmarks while incurring only minimal latency and memory overhead, and remain fully compatible with optimised attention implementations.
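The key idea, as the abstract states it, is that the steering happens entirely on the key embeddings before attention is computed, so no attention matrix ever needs to be materialised. The following is a minimal illustrative sketch of that idea, not the authors' actual algorithm: the function `edit_keys_spectral` is a hypothetical name, and the specific choice of steering direction (the top principal direction of the key matrix, found via SVD) and the signed push along it are assumptions made for illustration.

```python
import numpy as np

def edit_keys_spectral(K, highlight, alpha=0.5):
    """Illustrative sketch (not the paper's method): steer the keys of
    highlighted tokens along a dominant latent direction of the key
    matrix, so their dot products with aligned queries grow.

    K         : (seq_len, d) key embeddings
    highlight : (seq_len,) boolean mask of tokens to emphasise
    alpha     : steering strength (hypothetical hyperparameter)
    """
    # Spectral decomposition of the centred key matrix; Vt[0] is the
    # top right-singular vector, i.e. the dominant latent direction.
    _, _, Vt = np.linalg.svd(K - K.mean(0, keepdims=True),
                             full_matrices=False)
    v = Vt[0]
    K_edit = K.copy()
    # Push each highlighted key along v, signed so its projection onto
    # v (and hence scores for v-aligned queries) increases in magnitude.
    proj = K_edit[highlight] @ v
    K_edit[highlight] += alpha * np.sign(proj)[:, None] * v
    return K_edit
```

Because the edit is applied to `K` before any `softmax(QK^T)` call, the downstream attention can still be computed with a fused kernel such as FlashAttention; nothing here requires the full attention matrix to be stored.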