TC-Light: Temporally Consistent Relighting for Dynamic Long Videos
June 23, 2025
Authors: Yang Liu, Chuanchen Luo, Zimo Tang, Yingyan Li, Yuran Yang, Yuanyong Ning, Lue Fan, Junran Peng, Zhaoxiang Zhang
cs.AI
Abstract
Editing illumination in long videos with complex dynamics has significant value for various downstream tasks, including visual content creation and manipulation, as well as scaling up data for embodied AI through sim2real and real2real transfer. Nevertheless, existing video relighting techniques are predominantly limited to portrait videos or are bottlenecked by temporal consistency and computational efficiency. In this paper, we propose TC-Light, a novel paradigm characterized by a two-stage post-optimization mechanism. Starting from a video preliminarily relit by an inflated video relighting model, it optimizes an appearance embedding in the first stage to align global illumination. In the second stage, it optimizes the proposed canonical video representation, the Unique Video Tensor (UVT), to align fine-grained texture and lighting. To comprehensively evaluate performance, we also establish a benchmark of long and highly dynamic videos. Extensive experiments show that our method produces physically plausible relighting results with superior temporal coherence at low computational cost. The code and video demos are available at https://dekuliutesla.github.io/tclight/.
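
The abstract only sketches the two-stage post-optimization at a high level. The following is a minimal PyTorch sketch of what such a pipeline could look like, purely for illustration: the toy tensor shapes, the per-frame gain/bias standing in for the appearance embedding, the learnable residual tensor standing in for the Unique Video Tensor, and all loss terms are assumptions, not the authors' implementation.

# Illustrative two-stage post-optimization sketch (assumed shapes and losses,
# not the TC-Light implementation).
import torch
import torch.nn.functional as F

T, H, W = 16, 64, 64                       # toy video: frames, height, width
relit = torch.rand(T, 3, H, W)             # output of the inflated relighting model

# --- Stage 1: appearance embedding for global illumination alignment ---
# The "appearance embedding" is simplified here to a per-frame gain and bias,
# optimized so every frame's mean color/exposure matches the video-level mean.
gain = torch.ones(T, 3, 1, 1, requires_grad=True)
bias = torch.zeros(T, 3, 1, 1, requires_grad=True)
opt1 = torch.optim.Adam([gain, bias], lr=1e-2)
global_mean = relit.mean(dim=(0, 2, 3), keepdim=True)    # shape (1, 3, 1, 1)

for _ in range(200):
    adjusted = relit * gain + bias
    frame_mean = adjusted.mean(dim=(2, 3), keepdim=True)  # shape (T, 3, 1, 1)
    loss = F.mse_loss(frame_mean, global_mean.expand_as(frame_mean))
    opt1.zero_grad()
    loss.backward()
    opt1.step()

# --- Stage 2: canonical video tensor for fine-grained alignment ---
# A learnable video-sized tensor stands in for the Unique Video Tensor; it is
# pulled toward the stage-1 result (data term) while neighboring frames are
# encouraged to agree (naive temporal term, no optical flow).
with torch.no_grad():
    stage1 = relit * gain + bias
uvt = stage1.clone().requires_grad_(True)
opt2 = torch.optim.Adam([uvt], lr=5e-3)

for _ in range(200):
    data_term = F.mse_loss(uvt, stage1)
    temporal_term = F.mse_loss(uvt[1:], uvt[:-1])
    loss = data_term + 0.1 * temporal_term
    opt2.zero_grad()
    loss.backward()
    opt2.step()

result = uvt.detach().clamp(0, 1)          # final relit video in this toy setup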