
Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors

September 25, 2024
作者: Aiping Zhang, Zongsheng Yue, Renjing Pei, Wenqi Ren, Xiaochun Cao
cs.AI

Abstract

Diffusion-based image super-resolution (SR) methods have achieved remarkable success by leveraging large pre-trained text-to-image diffusion models as priors. However, these methods still face two challenges: they require dozens of sampling steps to achieve satisfactory results, which limits their efficiency in real-world scenarios, and they neglect the degradation model, which is critical auxiliary information in solving the SR problem. In this work, we introduce a novel one-step SR model, which significantly addresses the efficiency issue of diffusion-based SR methods. Unlike existing fine-tuning strategies, we design a degradation-guided Low-Rank Adaptation (LoRA) module specifically for SR, which corrects the model parameters based on degradation information pre-estimated from the low-resolution image. This module not only yields a powerful data-dependent, degradation-dependent SR model but also preserves the generative prior of the pre-trained diffusion model as much as possible. Furthermore, we tailor a novel training pipeline by introducing an online negative-sample generation strategy. Combined with a classifier-free guidance strategy during inference, this largely improves the perceptual quality of the super-resolution results. Extensive experiments demonstrate the superior efficiency and effectiveness of the proposed model compared to recent state-of-the-art methods.
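The two mechanisms the abstract describes, a LoRA update modulated by a degradation embedding estimated from the low-resolution input, and classifier-free guidance against negative samples at inference, can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the sigmoid per-rank gating, all tensor shapes, and the names `degradation_guided_lora` and `classifier_free_guidance` are assumptions made for illustration only.

```python
import numpy as np

def degradation_guided_lora(x, W, A, B, d_embed, G):
    """Hypothetical degradation-guided LoRA forward pass (sketch).

    x       : input activations, shape (n_in,)
    W       : frozen pre-trained weight, shape (n_out, n_in)
    A, B    : LoRA down/up projections, shapes (r, n_in) and (n_out, r)
    d_embed : degradation embedding estimated from the LR image, shape (k,)
    G       : maps the degradation embedding to per-rank gates, shape (r, k)
    """
    # Sigmoid gates in (0, 1): the estimated degradation modulates how
    # strongly each low-rank direction corrects the frozen prior.
    gates = 1.0 / (1.0 + np.exp(-(G @ d_embed)))
    base = W @ x                        # frozen generative-prior path
    correction = B @ (gates * (A @ x))  # degradation-modulated low-rank update
    return base + correction

def classifier_free_guidance(cond_out, uncond_out, scale):
    """Standard classifier-free guidance combination at inference:
    extrapolate away from the negative/unconditional prediction."""
    return uncond_out + scale * (cond_out - uncond_out)
```

With `scale = 1.0` the guided output reduces to the conditional prediction; larger scales push the result further from the negative-sample branch, which is how the abstract's online negative samples would sharpen perceptual quality at test time.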
