

TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis

July 27, 2023
作者: Zihan Zhang, Richard Liu, Kfir Aberman, Rana Hanocka
cs.AI

Abstract

The gradual nature of a diffusion process that synthesizes samples in small increments constitutes a key ingredient of Denoising Diffusion Probabilistic Models (DDPM), which have presented unprecedented quality in image synthesis and been recently explored in the motion domain. In this work, we propose to adapt the gradual diffusion concept (operating along a diffusion time-axis) into the temporal-axis of the motion sequence. Our key idea is to extend the DDPM framework to support temporally varying denoising, thereby entangling the two axes. Using our special formulation, we iteratively denoise a motion buffer that contains a set of increasingly-noised poses, which auto-regressively produces an arbitrarily long stream of frames. With a stationary diffusion time-axis, in each diffusion step we increment only the temporal-axis of the motion such that the framework produces a new, clean frame which is removed from the beginning of the buffer, followed by a newly drawn noise vector that is appended to it. This new mechanism paves the way towards a new framework for long-term motion synthesis with applications to character animation and other domains.
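The buffer mechanism described above can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' implementation: `denoise_step` is a hypothetical placeholder for the trained temporally-varying denoiser, and the linear noise schedule along the buffer is an assumption made for clarity. The point is the loop structure: each step denoises the whole buffer, pops a clean frame from the front, and appends a fresh noise vector at the back.

```python
import numpy as np

def denoise_step(buffer, noise_levels):
    """Hypothetical stand-in for TEDi's learned denoiser.

    Given a buffer of poses and each pose's noise level, return a
    slightly cleaner buffer. In the paper this is a neural network
    trained for temporally varying denoising; here it is a toy that
    merely shrinks each pose in proportion to its noise level.
    """
    return buffer * (1.0 - 0.1 * noise_levels[:, None])

def generate_stream(buffer_len=8, pose_dim=4, num_frames=16, seed=0):
    """Auto-regressively emit `num_frames` poses.

    The buffer holds `buffer_len` poses whose noise increases with
    position: the front pose is nearly clean, the back pose is close
    to pure noise. The diffusion time-axis is stationary (the schedule
    below never changes); only the motion's temporal axis advances.
    """
    rng = np.random.default_rng(seed)
    # Fixed per-position noise schedule along the buffer (assumed linear).
    noise_levels = np.linspace(0.0, 1.0, buffer_len)
    buffer = rng.standard_normal((buffer_len, pose_dim)) * noise_levels[:, None]

    stream = []
    for _ in range(num_frames):
        # One denoising pass over the buffer, each frame at its own noise level.
        buffer = denoise_step(buffer, noise_levels)
        # Pop the (now clean) front frame and emit it.
        stream.append(buffer[0].copy())
        # Shift the buffer forward and append a newly drawn noise vector.
        buffer = np.concatenate([buffer[1:], rng.standard_normal((1, pose_dim))])
    return np.stack(stream)
```

Because the loop only ever shifts and refills a fixed-size buffer, it can run indefinitely, which is what enables arbitrarily long frame streams.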