Exploiting Diffusion Prior for Real-World Image Super-Resolution
May 11, 2023
Authors: Jianyi Wang, Zongsheng Yue, Shangchen Zhou, Kelvin C. K. Chan, Chen Change Loy
cs.AI
Abstract
We present a novel approach to leverage prior knowledge encapsulated in
pre-trained text-to-image diffusion models for blind super-resolution (SR).
Specifically, by employing our time-aware encoder, we can achieve promising
restoration results without altering the pre-trained synthesis model, thereby
preserving the generative prior and minimizing training cost. To remedy the
loss of fidelity caused by the inherent stochasticity of diffusion models, we
introduce a controllable feature wrapping module that allows users to balance
quality and fidelity by simply adjusting a scalar value during the inference
process. Moreover, we develop a progressive aggregation sampling strategy to
overcome the fixed-size constraints of pre-trained diffusion models, enabling
adaptation to resolutions of any size. A comprehensive evaluation of our method
using both synthetic and real-world benchmarks demonstrates its superiority
over current state-of-the-art approaches.
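The abstract notes that the controllable feature wrapping module lets a user trade off quality against fidelity by adjusting a single scalar at inference time. The snippet below is a minimal sketch of how such a scalar-controlled residual fusion between generative decoder features and fidelity-oriented features from the low-quality input could look; the class name `ControllableFeatureFusion`, its convolutional transform, and the exact blending rule are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class ControllableFeatureFusion(nn.Module):
    """Sketch of a scalar-controlled feature fusion (assumed design).

    `decoder_feat` stands for features of the generative decoder and
    `encoder_feat` for features extracted from the low-quality input.
    The learned transform and residual blending are illustrative only.
    """

    def __init__(self, channels: int):
        super().__init__()
        # Assumed: a small convolutional transform on the concatenated
        # features, producing a fidelity-oriented correction term.
        self.transform = nn.Sequential(
            nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1),
            nn.SiLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, decoder_feat: torch.Tensor,
                encoder_feat: torch.Tensor, w: float) -> torch.Tensor:
        # w = 0 keeps the purely generative decoder features;
        # larger w weights the correction from the low-quality input.
        correction = self.transform(
            torch.cat([encoder_feat, decoder_feat], dim=1))
        return decoder_feat + w * correction


if __name__ == "__main__":
    fuse = ControllableFeatureFusion(channels=64)
    dec = torch.randn(1, 64, 32, 32)
    enc = torch.randn(1, 64, 32, 32)
    out = fuse(dec, enc, w=0.5)  # adjust the scalar at inference time
    print(out.shape)  # torch.Size([1, 64, 32, 32])
```

In this sketch, a scalar near 0 favours the generative prior (sharper but potentially less faithful detail), while a scalar near 1 weights the correction derived from the low-quality input more heavily, reflecting the quality–fidelity trade-off described in the abstract.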