CorrSteer: Steering Improves Task Performance and Safety in LLMs through Correlation-based Sparse Autoencoder Feature Selection
August 18, 2025
Authors: Seonglae Cho, Zekun Wu, Adriano Koshiyama
cs.AI
Abstract
Sparse Autoencoders (SAEs) can extract interpretable features from large
language models (LLMs) without supervision. However, their effectiveness in
downstream steering tasks is limited by the requirement for contrastive
datasets or large activation storage. To address these limitations, we propose
CorrSteer, which selects features by correlating sample correctness with SAE
activations from generated tokens at inference time. This approach uses only
inference-time activations to extract more relevant features, thereby avoiding
spurious correlations. It also obtains steering coefficients from average
activations, automating the entire pipeline. Our method shows improved task
performance on QA, bias mitigation, jailbreaking prevention, and reasoning
benchmarks on Gemma 2 2B and LLaMA 3.1 8B, notably achieving a +4.1%
improvement in MMLU performance and a +22.9% improvement in HarmBench with only
4000 samples. Selected features demonstrate semantically meaningful patterns
aligned with each task's requirements, revealing the underlying capabilities
that drive performance. Our work establishes correlation-based selection as an
effective and scalable approach for automated SAE steering across language
model applications.
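The selection step described in the abstract can be illustrated with a minimal sketch. The function name, the use of point-biserial (Pearson) correlation, and the per-sample mean-activation representation are illustrative assumptions, not the paper's exact implementation: each sample's generated-token SAE activations are averaged, correlated with a binary correctness label, and the top-correlated features are kept, with steering coefficients taken from the average activation on correct samples.

```python
import numpy as np

def select_steering_features(activations, correct, top_k=5):
    """Sketch of correlation-based SAE feature selection (assumed formulation).

    activations: (n_samples, n_features) mean SAE activations over generated tokens
    correct:     (n_samples,) binary correctness labels (0.0 / 1.0)
    Returns indices of the top-k correctness-correlated features and
    steering coefficients from their average activation on correct samples.
    """
    y = correct - correct.mean()
    X = activations - activations.mean(axis=0)
    # Pearson correlation of each feature column with correctness
    # (equivalent to point-biserial correlation for a binary label)
    denom = np.sqrt((X ** 2).sum(axis=0) * (y ** 2).sum()) + 1e-12
    corr = (X * y[:, None]).sum(axis=0) / denom
    idx = np.argsort(-corr)[:top_k]  # features most predictive of correctness
    # Coefficient = mean activation of each selected feature on correct samples
    coeffs = activations[correct == 1][:, idx].mean(axis=0)
    return idx, coeffs
```

At steering time, the selected decoder directions would be added to the residual stream scaled by these coefficients; the point of the sketch is only that both the feature choice and the coefficient fall out of inference-time statistics, with no contrastive dataset required.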