
AdaPreLoRA: Adafactor Preconditioned Low-Rank Adaptation

May 9, 2026
Authors: Ziyun Liu, Fengmiao Bian, Jian-Feng Cai
cs.AI

Abstract

Low-Rank Adaptation (LoRA) reparameterizes a weight update as the product of two low-rank factors, but the Jacobian J_G of the generator mapping the factors to the weight matrix is rank-deficient, so the factor-space preconditioner J_G^* F_t J_G induced by any W-space preconditioner F_t is singular, and the standard chain rule consequently cannot be uniquely inverted to map a preconditioned W-space direction back to a factor-space update. We cast existing LoRA optimizers in a unified framework parameterized by two choices: (i) which invertible surrogate for J_G^* F_t J_G to use, and (ii) which F_t on W to use. Existing methods occupy four families along these axes: factor-space adaptive updates, block-diagonal surrogates for J_G^* J_G, Frobenius-residual pseudoinverse methods, and Riemannian manifold constraints. Within this design space, a gradient-statistics-aware F_t paired with a closed-form factor-space solve at O((m+n)r) memory remains underexplored. We propose AdaPreLoRA, which fills this gap by adopting the Adafactor diagonal Kronecker preconditioner H_t on W and selecting, from the resulting family of factor-space solutions, the element that minimizes an H_t-weighted imbalance between the two factor contributions; by construction, the resulting factor update is the closest LoRA approximation to the preconditioned W-space direction under the H_t-weighted norm. Across GPT-2 (E2E), Mistral-7B and Qwen2-7B (GLUE, ARC, GSM8K), and diffusion-model personalization, AdaPreLoRA is competitive with or improves over a representative set of LoRA optimizers while keeping peak GPU memory at the level of standard LoRA optimizers.
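
To make the two ingredients concrete, the NumPy sketch below is a minimal illustration, not the paper's exact algorithm. It shows (1) Adafactor-style factored second-moment statistics, which store only O(m + n) state per m x n weight and define a diagonal Kronecker preconditioner of the kind the abstract calls H_t, and (2) a closed-form factor-space solve mapping the preconditioned W-space direction back to updates of the LoRA factors B (m x r) and A (r x n). The pseudoinverse solve shown is the plain Frobenius-residual variant from the taxonomy above; AdaPreLoRA instead selects the H_t-weighted solution that balances the two factor contributions, whose closed form is not reproduced here. The function names (adafactor_precondition, factor_space_solve) are illustrative, not from the paper.

import numpy as np

def adafactor_precondition(grad, row_stat, col_stat, beta2=0.999, eps=1e-30):
    # Update the factored second moments in place. row_stat (m,) and
    # col_stat (n,) are running means of the squared gradient over columns
    # and rows respectively -- O(m + n) optimizer state, as in Adafactor.
    sq = grad ** 2 + eps
    row_stat *= beta2
    row_stat += (1.0 - beta2) * sq.mean(axis=1)
    col_stat *= beta2
    col_stat += (1.0 - beta2) * sq.mean(axis=0)
    # Adafactor's rank-1 reconstruction of the second moment is
    # V_hat = outer(row_stat, col_stat) / row_stat.mean(); the preconditioned
    # direction is grad / sqrt(V_hat), applied here in factored form so that
    # no m x n statistic is ever stored.
    return grad * np.sqrt(row_stat.mean()) / (
        np.sqrt(row_stat)[:, None] * np.sqrt(col_stat)[None, :]
    )

def factor_space_solve(direction, B, A):
    # Map a W-space direction D to factor updates (dB, dA) such that
    # dB @ A + B @ dA approximates D. This is the unweighted
    # Frobenius-residual pseudoinverse solve from the taxonomy above,
    # standing in for AdaPreLoRA's H_t-weighted variant.
    dB = direction @ np.linalg.pinv(A)   # (m, r)
    dA = np.linalg.pinv(B) @ direction   # (r, n)
    return dB, dA

# Toy usage: one preconditioned update of the LoRA factors.
rng = np.random.default_rng(0)
m, n, r = 64, 48, 4
B = rng.normal(size=(m, r))
A = rng.normal(size=(r, n))
grad_W = rng.normal(size=(m, n))                # stand-in for dL/dW
row_stat, col_stat = np.zeros(m), np.zeros(n)   # factored optimizer state
D = adafactor_precondition(grad_W, row_stat, col_stat)
dB, dA = factor_space_solve(D, B, A)
B -= 1e-3 * dB
A -= 1e-3 * dA

Applying the preconditioner in factored form is what keeps the added optimizer state at O(m + n) per layer, consistent with the O((m + n)r) total memory the abstract claims once the r-dimensional factor buffers are counted.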