
Generative Refocusing: Flexible Defocus Control from a Single Image

December 18, 2025
作者: Chun-Wei Tuan Mu, Jia-Bin Huang, Yu-Lun Liu
cs.AI

Abstract

Depth-of-field control is essential in photography, but achieving perfect focus often takes several attempts or specialized equipment. Single-image refocusing remains difficult because it involves two problems: recovering sharp content and creating realistic bokeh. Current methods have significant drawbacks: they require all-in-focus inputs, depend on synthetic data from simulators, and offer limited aperture control. We introduce Generative Refocusing, a two-step pipeline that uses DeblurNet to recover an all-in-focus image from diverse inputs and BokehNet to synthesize controllable bokeh. Our main innovation is a semi-supervised training scheme that combines synthetic paired data with unpaired real bokeh photographs, using EXIF metadata to capture real optical characteristics beyond what simulators can provide. Experiments show top performance on defocus-deblurring, bokeh-synthesis, and refocusing benchmarks. Generative Refocusing also supports text-guided adjustments and custom aperture shapes.
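The two-stage pipeline described above can be sketched as follows. This is a minimal illustrative sketch only: `deblur_net` and `bokeh_net` are hypothetical stand-in callables (the paper's actual models and their interfaces are not specified here), and the parameter names `aperture`, `focus_depth`, and `aperture_shape` are assumptions chosen to mirror the controls the abstract mentions.

```python
# Hypothetical sketch of the two-stage Generative Refocusing pipeline.
# deblur_net / bokeh_net are placeholders, not the authors' actual code.

def deblur_net(image):
    # Stage 1 (placeholder): recover an all-in-focus image from an
    # arbitrarily focused input.
    return image

def bokeh_net(all_in_focus, aperture=2.8, focus_depth=0.5,
              aperture_shape="circle"):
    # Stage 2 (placeholder): re-render the scene with the requested
    # aperture size, focus plane, and aperture shape.
    return {"image": all_in_focus,
            "aperture": aperture,
            "focus_depth": focus_depth,
            "shape": aperture_shape}

def generative_refocus(image, aperture=2.8, focus_depth=0.5,
                       shape="circle"):
    sharp = deblur_net(image)  # step 1: defocus deblurring
    # step 2: controllable bokeh synthesis on the recovered sharp image
    return bokeh_net(sharp, aperture, focus_depth, shape)

result = generative_refocus("input.jpg", aperture=1.4, shape="hexagon")
```

Decoupling the two stages means the bokeh renderer always operates on a sharp image, which is what lets a single model both deblur defocused inputs and re-render them at a different aperture.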