

ZipLoRA: Any Subject in Any Style by Effectively Merging LoRAs

November 22, 2023
作者: Viraj Shah, Nataniel Ruiz, Forrester Cole, Erika Lu, Svetlana Lazebnik, Yuanzhen Li, Varun Jampani
cs.AI

Abstract

Methods for finetuning generative models for concept-driven personalization generally achieve strong results for subject-driven or style-driven generation. Recently, low-rank adaptations (LoRA) have been proposed as a parameter-efficient way of achieving concept-driven personalization. While recent work explores the combination of separate LoRAs to achieve joint generation of learned styles and subjects, existing techniques do not reliably address the problem; they often compromise either subject fidelity or style fidelity. We propose ZipLoRA, a method to cheaply and effectively merge independently trained style and subject LoRAs in order to achieve generation of any user-provided subject in any user-provided style. Experiments on a wide range of subject and style combinations show that ZipLoRA can generate compelling results with meaningful improvements over baselines in subject and style fidelity while preserving the ability to recontextualize. Project page: https://ziplora.github.io
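The abstract contrasts ZipLoRA with naive combinations of separately trained LoRAs. For intuition, the simplest such baseline, a direct weighted sum of the two low-rank weight deltas, can be sketched as below. This is a toy illustration with random matrices; the dimensions, variable names, and the `merge_loras` helper are illustrative assumptions, not ZipLoRA's actual merging procedure (which learns how to combine the adapters rather than using fixed scalar weights).

```python
import numpy as np

rng = np.random.default_rng(0)

d, k, r = 8, 8, 4  # hypothetical layer dimensions and LoRA rank

# Two independently trained LoRA updates, each stored as a low-rank
# factor pair (B, A) so that the weight delta is B @ A (rank <= r).
B_subj, A_subj = rng.normal(size=(d, r)), rng.normal(size=(r, k))
B_style, A_style = rng.normal(size=(d, r)), rng.normal(size=(r, k))

def merge_loras(pairs, coeffs):
    """Naively merge LoRA deltas as a weighted sum of their full updates."""
    return sum(c * (B @ A) for (B, A), c in zip(pairs, coeffs))

# Equal-weight direct merge -- the baseline that tends to trade off
# subject fidelity against style fidelity.
delta = merge_loras([(B_subj, A_subj), (B_style, A_style)], coeffs=[1.0, 1.0])

W_base = rng.normal(size=(d, k))  # stand-in for a pretrained weight matrix
W_merged = W_base + delta         # weights a merged model would use at inference
print(W_merged.shape)  # → (8, 8)
```

Because the two deltas are simply added, directions important to the subject adapter can be overwritten by the style adapter (and vice versa); ZipLoRA's contribution is to merge the pair in a way that avoids this interference.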