TC-Light: Temporally Consistent Relighting for Dynamic Long Videos
June 23, 2025
Authors: Yang Liu, Chuanchen Luo, Zimo Tang, Yingyan Li, Yuran Yang, Yuanyong Ning, Lue Fan, Junran Peng, Zhaoxiang Zhang
cs.AI
Abstract
Editing illumination in long videos with complex dynamics has significant value for a range of downstream tasks, including visual content creation and manipulation, as well as scaling up data for embodied AI through sim2real and real2real transfer. Nevertheless, existing video relighting techniques are predominantly limited to portrait videos or are bottlenecked by temporal consistency and computational efficiency. In this paper, we propose TC-Light, a novel paradigm characterized by a two-stage post-optimization mechanism. Starting from a video preliminarily relit by an inflated video relighting model, it optimizes an appearance embedding in the first stage to align global illumination, and then optimizes the proposed canonical video representation, the Unique Video Tensor (UVT), in the second stage to align fine-grained texture and lighting. To comprehensively evaluate performance, we also establish a long and highly dynamic video benchmark. Extensive experiments show that our method produces physically plausible relighting results with superior temporal coherence and low computational cost. The code and video demos are available at https://dekuliutesla.github.io/tclight/.
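As a rough illustration of the two-stage post-optimization described in the abstract, the sketch below models stage one as a per-frame appearance adjustment aligned to a video-level illumination reference, and stage two as the optimization of a simple canonical-plus-residual video tensor with a temporal smoothness term standing in for the Unique Video Tensor. All function names, tensor shapes, and loss choices are assumptions made for illustration and do not reflect the authors' implementation.

```python
# Minimal PyTorch sketch of a two-stage post-optimization in the spirit of the
# abstract. Variable names, shapes, and losses are illustrative assumptions.
import torch


def two_stage_post_optimization(relit_video, steps_stage1=200, steps_stage2=500, lr=1e-2):
    """relit_video: (T, C, H, W) tensor produced by the inflated relighting model."""
    T, C, _, _ = relit_video.shape

    # Stage 1: optimize a per-frame appearance embedding (here a simple
    # gain/offset pair) so global illumination is consistent across frames.
    gain = torch.ones(T, C, 1, 1, requires_grad=True)
    offset = torch.zeros(T, C, 1, 1, requires_grad=True)
    opt1 = torch.optim.Adam([gain, offset], lr=lr)
    global_ref = relit_video.mean(dim=(0, 2, 3), keepdim=True)  # video-level color statistics
    for _ in range(steps_stage1):
        adjusted = gain * relit_video + offset
        frame_stats = adjusted.mean(dim=(2, 3), keepdim=True)
        loss = ((frame_stats - global_ref) ** 2).mean()
        opt1.zero_grad()
        loss.backward()
        opt1.step()
    stage1_video = (gain * relit_video + offset).detach()

    # Stage 2: optimize a canonical video representation (a stand-in for the
    # paper's Unique Video Tensor), modeled here as a shared canonical image
    # plus temporally smooth per-frame residuals, to align fine-grained
    # texture and lighting.
    canonical = stage1_video.mean(dim=0, keepdim=True).clone().requires_grad_(True)
    residuals = torch.zeros_like(stage1_video, requires_grad=True)
    opt2 = torch.optim.Adam([canonical, residuals], lr=lr)
    for _ in range(steps_stage2):
        recon = canonical + residuals
        data_term = ((recon - stage1_video) ** 2).mean()             # stay near stage-1 result
        smooth_term = (residuals[1:] - residuals[:-1]).abs().mean()  # temporal coherence
        loss = data_term + 0.1 * smooth_term
        opt2.zero_grad()
        loss.backward()
        opt2.step()
    return (canonical + residuals).detach()


if __name__ == "__main__":
    dummy = torch.rand(8, 3, 64, 64)  # a short dummy clip
    out = two_stage_post_optimization(dummy, steps_stage1=10, steps_stage2=10)
    print(out.shape)  # torch.Size([8, 3, 64, 64])
```

The split mirrors the abstract's description: a cheap global alignment first, then a finer per-pixel refinement regularized toward temporal coherence, which is what keeps the post-optimization inexpensive relative to re-running a video diffusion model.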