

NeuroAda: Activating Each Neuron's Potential for Parameter-Efficient Fine-Tuning

October 21, 2025
Authors: Zhi Zhang, Yixian Shen, Congfeng Cao, Ekaterina Shutova
cs.AI

Abstract

Existing parameter-efficient fine-tuning (PEFT) methods primarily fall into two categories: addition-based and selective in-situ adaptation. The former, such as LoRA, introduce additional modules to adapt the model to downstream tasks, offering strong memory efficiency. However, their representational capacity is often limited, making them less suitable for fine-grained adaptation. In contrast, the latter directly fine-tunes a carefully chosen subset of the original model parameters, allowing for more precise and effective adaptation, but at the cost of significantly increased memory consumption. To reconcile this trade-off, we propose NeuroAda, a novel PEFT method that enables fine-grained model fine-tuning while maintaining high memory efficiency. Our approach first identifies important parameters (i.e., connections within the network) as in selective adaptation, and then introduces bypass connections for these selected parameters. During fine-tuning, only the bypass connections are updated, leaving the original model parameters frozen. Empirical results on 23+ tasks spanning both natural language generation and understanding demonstrate that NeuroAda achieves state-of-the-art performance with as little as ≤0.02% trainable parameters, while reducing CUDA memory usage by up to 60%. We release our code here: https://github.com/FightingFighting/NeuroAda.git.
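For illustration, the mechanism the abstract describes (select important connections, then train only sparse bypass terms added to a frozen weight) can be sketched as a small PyTorch wrapper. This is a minimal sketch under stated assumptions, not the released implementation (see the repository linked above): the per-neuron top-k magnitude selection criterion, the class name NeuroAdaLinear, and the k_per_neuron parameter are all hypothetical stand-ins.

```python
# Minimal sketch of a NeuroAda-style bypass layer (assumed design: per-neuron
# top-k magnitude selection; the official code is in the repository above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class NeuroAdaLinear(nn.Module):
    """Wraps a frozen nn.Linear and trains sparse bypass deltas only."""

    def __init__(self, base: nn.Linear, k_per_neuron: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():            # keep original weights frozen
            p.requires_grad_(False)

        # Pick the k largest-magnitude input connections of each output neuron
        # (hypothetical selection criterion standing in for the paper's).
        with torch.no_grad():
            idx = base.weight.abs().topk(k_per_neuron, dim=1).indices  # (out, k)
        self.register_buffer("idx", idx)

        # One trainable bypass parameter per selected connection.
        self.delta = nn.Parameter(torch.zeros(base.out_features, k_per_neuron))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scatter the sparse deltas into a dense correction and add it to the
        # frozen weight; only self.delta receives gradients.
        w_delta = torch.zeros_like(self.base.weight).scatter(1, self.idx, self.delta)
        return F.linear(x, self.base.weight + w_delta, self.base.bias)


# Usage sketch: swap in the wrapper and optimize only the bypass parameters.
layer = NeuroAdaLinear(nn.Linear(1024, 1024), k_per_neuron=4)
optimizer = torch.optim.AdamW(
    [p for p in layer.parameters() if p.requires_grad], lr=1e-3
)
```

The sketch materializes a dense weight correction for clarity; an implementation aiming for the reported memory savings would presumably apply the sparse deltas directly without forming the dense correction.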