

SoundCTM: Uniting Score-based and Consistency Models for Text-to-Sound Generation

May 28, 2024
Authors: Koichi Saito, Dongjun Kim, Takashi Shibuya, Chieh-Hsin Lai, Zhi Zhong, Yuhta Takida, Yuki Mitsufuji
cs.AI

Abstract

Sound content is an indispensable element for multimedia works such as video games, music, and films. Recent high-quality diffusion-based sound generation models can serve as valuable tools for creators. However, despite producing high-quality sounds, these models often suffer from slow inference speeds. This drawback burdens creators, who typically refine their sounds through trial and error to align them with their artistic intentions. To address this issue, we introduce Sound Consistency Trajectory Models (SoundCTM). Our model enables flexible transitions between high-quality 1-step sound generation and superior-quality multi-step generation. This allows creators to initially control sounds with 1-step samples before refining them through multi-step generation. While CTM fundamentally achieves flexible 1-step and multi-step generation, its impressive performance heavily depends on an additional pretrained feature extractor and an adversarial loss, which are expensive to train and not always available in other domains. Thus, we reframe CTM's training framework and introduce a novel feature distance, utilizing the teacher network for a distillation loss. Additionally, while distilling classifier-free guided trajectories, we train conditional and unconditional student models simultaneously and interpolate between these models during inference. We also propose training-free controllable frameworks for SoundCTM, leveraging its flexible sampling capability. SoundCTM achieves both promising 1-step and multi-step real-time sound generation without using any extra off-the-shelf networks. Furthermore, we demonstrate SoundCTM's capability of controllable sound generation in a training-free manner.
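
The abstract describes interpolating between the distilled conditional and unconditional student models at inference, in the spirit of classifier-free guidance. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation; the model signatures, the `guidance_scale` value, and the function name are assumptions made for clarity.

```python
import torch

def cfg_interpolated_jump(cond_student, uncond_student, x_t, t, s, text_emb,
                          guidance_scale: float = 3.0) -> torch.Tensor:
    """Hypothetical single jump from noise level t to s that blends two
    distilled student models in a classifier-free-guidance style.

    `cond_student` and `uncond_student` are assumed to map a noisy latent x_t
    (plus the jump endpoints and, for the conditional model, a text embedding)
    to a prediction at noise level s; these signatures are illustrative only.
    """
    with torch.no_grad():
        x_cond = cond_student(x_t, t, s, text_emb)   # text-conditional prediction
        x_uncond = uncond_student(x_t, t, s)         # unconditional prediction
    # Linear interpolation/extrapolation between the two predictions,
    # analogous to classifier-free guidance applied at sampling time.
    return x_uncond + guidance_scale * (x_cond - x_uncond)
```

Setting `guidance_scale` to 1.0 would recover the purely conditional prediction, while larger values push the sample further toward the text condition, which is the usual trade-off in classifier-free guidance.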
