Leveraging Self-Attention for Input-Dependent Soft Prompting in LLMs
June 5, 2025
Authors: Ananth Muppidi, Abhilash Nandy, Sambaran Bandyopadhyay
cs.AI
Abstract
Achieving strong performance with large language models on domain-specific tasks often necessitates fine-tuning, which is computationally expensive and technically challenging. This paper focuses on parameter-efficient fine-tuning using soft prompting, a promising approach that adapts pre-trained models to downstream tasks by learning a small set of parameters. We propose a novel Input Dependent Soft Prompting technique with a self-Attention Mechanism (ID-SPAM) that generates soft prompts based on the input tokens and attends to different tokens with varying importance. Our method is simple and efficient, keeping the number of trainable parameters small. We show the merits of the proposed approach compared to state-of-the-art techniques on various tasks and demonstrate its improved zero-shot domain transfer capability.
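To make the core idea concrete, the following is a minimal PyTorch sketch of input-dependent soft prompting with attention: learnable prompt queries attend over the input token embeddings, so the generated soft prompt depends on the input and weighs tokens by varying importance, and only the small prompt generator is trained while the pre-trained model stays frozen. This is an illustrative approximation of the general technique, not the authors' released ID-SPAM implementation; the module, class, and parameter names (e.g., `InputDependentSoftPrompt`, `num_prompt_tokens`) are assumptions.

```python
# Sketch of input-dependent soft prompt generation via attention (assumed design).
import torch
import torch.nn as nn


class InputDependentSoftPrompt(nn.Module):
    def __init__(self, hidden_size: int, num_prompt_tokens: int, num_heads: int = 4):
        super().__init__()
        # Learnable queries, one per soft prompt token (the only trainable parameters
        # besides the attention projections).
        self.prompt_queries = nn.Parameter(torch.randn(num_prompt_tokens, hidden_size) * 0.02)
        # Self-attention lets each prompt token weigh the input tokens by importance.
        self.attn = nn.MultiheadAttention(hidden_size, num_heads, batch_first=True)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, hidden_size), e.g. from the frozen embedding layer.
        batch_size = input_embeds.size(0)
        queries = self.prompt_queries.unsqueeze(0).expand(batch_size, -1, -1)
        # Each prompt query attends over the input tokens, so the resulting
        # soft prompt is a function of the specific input.
        prompts, _ = self.attn(queries, input_embeds, input_embeds)
        return prompts  # (batch, num_prompt_tokens, hidden_size)


# Usage sketch: prepend the generated soft prompts to the input embeddings and feed
# the concatenation to a frozen pre-trained LM; only the generator is updated.
if __name__ == "__main__":
    hidden_size, num_prompt_tokens = 768, 10
    generator = InputDependentSoftPrompt(hidden_size, num_prompt_tokens)
    input_embeds = torch.randn(2, 32, hidden_size)          # stand-in for frozen embeddings
    soft_prompts = generator(input_embeds)                  # (2, 10, 768)
    augmented = torch.cat([soft_prompts, input_embeds], 1)  # (2, 42, 768)
    print(augmented.shape)
```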