HDR Video Generation via Latent Alignment with Logarithmic Encoding
April 13, 2026
Authors: Naomi Ken Korem, Mohamed Oumoumad, Harel Cain, Matan Ben Yosef, Urska Jelercic, Ofir Bibi, Yaron Inger, Or Patashnik, Daniel Cohen-Or
cs.AI
Abstract
High dynamic range (HDR) imagery offers a rich and faithful representation of scene radiance, but remains challenging for generative models due to its mismatch with the bounded, perceptually compressed data on which these models are trained. A natural solution is to learn new representations for HDR, which introduces additional complexity and data requirements. In this work, we show that HDR generation can be achieved in a much simpler way by leveraging the strong visual priors already captured by pretrained generative models. We observe that a logarithmic encoding widely used in cinematic pipelines maps HDR imagery into a distribution that is naturally aligned with the latent space of these models, enabling direct adaptation via lightweight fine-tuning without retraining an encoder. To recover details that are not directly observable in the input, we further introduce a training strategy based on camera-mimicking degradations that encourages the model to infer missing high dynamic range content from its learned priors. Combining these insights, we demonstrate high-quality HDR video generation using a pretrained video model with minimal adaptation, achieving strong results across diverse scenes and challenging lighting conditions. Our results indicate that HDR, despite representing a fundamentally different image formation regime, can be handled effectively without redesigning generative models, provided that the representation is chosen to align with their learned priors.
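To make the core idea concrete: cinematic log encodings compress linear scene radiance into a bounded range by working in exposure stops rather than absolute intensity. The abstract does not specify which curve is used; the sketch below is an illustrative stand-in for such curves (e.g., ARRI LogC or Sony S-Log), not the paper's exact transfer function, and the stop range is an assumed parameter.

```python
import numpy as np

def log_encode(linear, min_stops=-8.0, max_stops=8.0, eps=1e-8):
    """Map linear radiance (relative to middle gray = 1.0) into [0, 1] by
    normalizing exposure measured in log2 stops. Illustrative only; real
    cinematic log curves add a linear toe and tuned constants."""
    stops = np.log2(np.maximum(linear, eps))            # exposure in stops
    encoded = (stops - min_stops) / (max_stops - min_stops)
    return np.clip(encoded, 0.0, 1.0)                   # bounded, SDR-like range

def log_decode(encoded, min_stops=-8.0, max_stops=8.0):
    """Invert the encoding back to linear radiance."""
    return np.exp2(encoded * (max_stops - min_stops) + min_stops)

hdr = np.array([0.01, 1.0, 16.0, 200.0])  # linear radiance, middle gray = 1.0
enc = log_encode(hdr)                      # values in [0, 1]; middle gray -> 0.5
rec = log_decode(enc)                      # round-trips within the stop range
```

Because the encoded values are bounded and perceptually compressed, much like the SDR images a pretrained generative model was trained on, HDR frames in this representation land close to the model's learned latent distribution, which is why lightweight fine-tuning suffices.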