

Not All Layers Are Created Equal: Adaptive LoRA Ranks for Personalized Image Generation

March 23, 2026
作者: Donald Shenaj, Federico Errica, Antonio Carta
cs.AI

Abstract

Low-Rank Adaptation (LoRA) is the de facto fine-tuning strategy for generating personalized images from pre-trained diffusion models. Choosing a good rank is critical, since it trades off performance against memory consumption, yet today the decision is often left to community consensus, regardless of the personalized subject's complexity. The reason is evident: the cost of selecting a good rank for each LoRA component grows combinatorially, so practitioners opt for practical shortcuts such as fixing the same rank for all components. In this paper, we take a first step toward overcoming this challenge. Inspired by variational methods that learn an adaptive width for neural networks, we let the rank of each layer freely adapt during fine-tuning on a subject. We achieve this by imposing an ordering of importance on the rank positions, effectively encouraging the creation of higher ranks only when strictly needed. Qualitatively and quantitatively, our approach, LoRA^2, achieves a competitive trade-off between DINO, CLIP-I, and CLIP-T across 29 subjects while requiring much less memory and a lower rank than high-rank LoRA variants. Code: https://github.com/donaldssh/NotAllLayersAreCreatedEqual.
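The "ordering of importance on the rank positions" can be illustrated with a small toy sketch: gate each rank position of a LoRA update and force the gates to be non-increasing, so a later rank position can only be active if all earlier ones are. This is an illustrative NumPy sketch, not the authors' implementation — the gate parameterization (a cumulative product of sigmoids) and all names here are assumptions.

```python
import numpy as np

def ordered_gates(logits):
    # Cumulative product of sigmoids yields gates g_1 >= g_2 >= ... >= g_r,
    # imposing an importance ordering: rank position i can only be "on"
    # if every earlier position is also on.
    s = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return np.cumprod(s)

def lora_delta(A, B, gates):
    # Gated low-rank update: delta_W = B @ diag(g) @ A.
    # Gates near zero switch off their rank position entirely.
    return (B * gates) @ A

rng = np.random.default_rng(0)
r, d_in, d_out = 8, 16, 16
A = rng.standard_normal((r, d_in))   # LoRA down-projection
B = rng.standard_normal((d_out, r))  # LoRA up-projection

# Hypothetical learned logits: the first three positions are strongly on,
# the rest strongly off, so the layer settles at a low effective rank.
g = ordered_gates([4.0, 4.0, 4.0, -4.0, -4.0, -4.0, -4.0, -4.0])
dW = lora_delta(A, B, g)
eff_rank = int((g > 0.5).sum())  # effective rank after thresholding
```

In a full training loop one would learn a logits vector per layer alongside `A` and `B`, letting each layer settle at its own effective rank instead of a single rank fixed across all components.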