Not All Layers Are Created Equal: Adaptive LoRA Ranks for Personalized Image Generation
March 23, 2026
Authors: Donald Shenaj, Federico Errica, Antonio Carta
cs.AI
Abstract
Low-Rank Adaptation (LoRA) is the de facto fine-tuning strategy for generating personalized images from pre-trained diffusion models. Choosing a good rank is critical, since it trades off performance against memory consumption, yet today the decision is often left to community consensus, regardless of the personalized subject's complexity. The reason is evident: the cost of selecting a good rank for each LoRA component grows combinatorially, so practitioners resort to practical shortcuts such as fixing the same rank for all components. In this paper, we take a first step toward overcoming this challenge. Inspired by variational methods that learn an adaptive width for neural networks, we let the rank of each layer adapt freely during fine-tuning on a subject. We achieve this by imposing an ordering of importance on the rank positions, effectively encouraging the creation of higher ranks only when strictly needed. Qualitatively and quantitatively, our approach, LoRA^2, achieves a competitive trade-off between DINO, CLIP-I, and CLIP-T scores across 29 subjects while requiring far less memory and lower ranks than high-rank LoRA variants. Code: https://github.com/donaldssh/NotAllLayersAreCreatedEqual.
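To make the "ordering of importance on the rank positions" concrete, here is a minimal NumPy sketch of a LoRA update whose effective rank can shrink per layer. The cumulative-gate scheme below (rank position i contributes only if all earlier positions do) is a hypothetical stand-in for the paper's variational mechanism, which the abstract does not detail; names such as `lora_delta`, `effective_rank`, and `max_rank` are illustrative, not from the paper.

```python
import numpy as np

def lora_delta(A, B, gates):
    """Low-rank weight update B @ diag(g) @ A with ordered rank gates.

    A: (r, d_in), B: (d_out, r), gates: (r,) with values in [0, 1].
    Taking the cumulative product of the gates imposes an importance
    ordering on rank positions: g[i] <= g[i-1], so a later rank can only
    be "on" if every earlier rank is too.
    """
    g = np.cumprod(gates)          # ordered importance weights
    return B @ (g[:, None] * A)    # scale each rank-1 component, then compose

def effective_rank(gates, threshold=0.5):
    """Count rank positions whose cumulative gate survives a threshold."""
    return int(np.sum(np.cumprod(gates) > threshold))

rng = np.random.default_rng(0)
d_in, d_out, max_rank = 8, 8, 4
A = rng.normal(size=(max_rank, d_in))
B = rng.normal(size=(d_out, max_rank))
gates = np.array([0.99, 0.95, 0.2, 0.9])  # a low third gate truncates it
                                          # and every rank after it

delta = lora_delta(A, B, gates)
print(delta.shape)            # (8, 8)
print(effective_rank(gates))  # 2: the ordering cuts ranks 3 and 4 together
```

In a fine-tuning loop the gates would be learned jointly with `A` and `B` (e.g. with a sparsity-inducing prior on the gates), so each layer settles on only as many active rank positions as the subject requires, which is the memory saving the abstract claims.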