
Generative Refocusing: Flexible Defocus Control from a Single Image

December 18, 2025
作者: Chun-Wei Tuan Mu, Jia-Bin Huang, Yu-Lun Liu
cs.AI

Abstract

Depth-of-field control is essential in photography, but achieving perfect focus often takes several attempts or specialized equipment. Single-image refocusing remains difficult because it requires both recovering sharp content and synthesizing realistic bokeh. Current methods have significant drawbacks: they need all-in-focus inputs, depend on synthetic data from simulators, and offer limited aperture control. We introduce Generative Refocusing, a two-stage pipeline that uses DeblurNet to recover an all-in-focus image from diverse inputs and BokehNet to synthesize controllable bokeh. Our main innovation is a semi-supervised training scheme that combines synthetic paired data with unpaired real bokeh images, using EXIF metadata to capture real optical characteristics beyond what simulators can provide. Experiments show state-of-the-art performance on defocus-deblurring, bokeh-synthesis, and refocusing benchmarks. Generative Refocusing also supports text-guided parameter adjustments and custom aperture shapes.
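The two-stage pipeline described in the abstract can be illustrated with a minimal, self-contained sketch. Everything here is a hypothetical stand-in, not the paper's implementation: `deblur_net` mocks the learned DeblurNet with unsharp masking, and `bokeh_net` mocks BokehNet with a depth-dependent box blur controlled by an `aperture` parameter. The real system uses trained generative networks; this only shows the deblur-then-reblur structure.

```python
import numpy as np

def box_blur(img, radius):
    """Separable box blur on a 2-D grayscale image (stand-in for a defocus kernel)."""
    if radius <= 0:
        return img.copy()
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)

def deblur_net(image):
    """Stage 1 (mock DeblurNet): recover an approximate all-in-focus image.

    Unsharp masking is a crude placeholder for learned defocus deblurring.
    """
    blurred = box_blur(image, radius=1)
    return np.clip(image + 0.7 * (image - blurred), 0.0, 1.0)

def bokeh_net(image, depth, focus_depth, aperture):
    """Stage 2 (mock BokehNet): blur each pixel by its defocus amount.

    The blur radius grows with |depth - focus_depth| scaled by `aperture`,
    mimicking controllable bokeh; a learned model would synthesize this.
    """
    radii = np.clip(np.round(aperture * np.abs(depth - focus_depth)).astype(int), 0, 5)
    blur_bank = {r: box_blur(image, r) for r in np.unique(radii)}
    out = np.zeros_like(image)
    for r, blurred in blur_bank.items():
        out[radii == r] = blurred[radii == r]
    return out

def generative_refocus(image, depth, focus_depth, aperture):
    """Full pipeline: deblur to all-in-focus, then re-render the target defocus."""
    return bokeh_net(deblur_net(image), depth, focus_depth, aperture)
```

With `aperture=0` the second stage is a no-op and the output is just the recovered all-in-focus image; increasing `aperture` or shifting `focus_depth` re-renders the depth of field, which is the flexibility the abstract refers to.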
December 20, 2025