

In-Context Prompt Editing For Conditional Audio Generation

November 1, 2023
Authors: Ernie Chang, Pin-Jie Lin, Yang Li, Sidd Srinivasan, Gael Le Lan, David Kant, Yangyang Shi, Forrest Iandola, Vikas Chandra
cs.AI

Abstract

Distributional shift is a central challenge in the deployment of machine learning models, as they can be ill-equipped for real-world data. This is particularly evident in text-to-audio generation, where the encoded representations are easily undermined by unseen prompts, leading to degradation of the generated audio -- the limited set of text-audio pairs remains inadequate for conditional audio generation in the wild because user prompts are under-specified. In particular, we observe a consistent degradation in audio quality for samples generated from user prompts, as opposed to training set prompts. To this end, we present a retrieval-based in-context prompt editing framework that leverages the training captions as demonstrative exemplars to revise user prompts. We show that the framework enhances audio quality across a set of collected user prompts, which were edited with reference to the training captions as exemplars.
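To make the described pipeline concrete, the sketch below shows one plausible way to implement retrieval-based in-context prompt editing: retrieve the training captions most similar to a user prompt, then present them as exemplars to an instruction-following language model that rewrites the prompt in the style of the training data. The embedding model, the `llm_complete` helper, and the sample captions are illustrative assumptions for this sketch, not the authors' implementation.

```python
# Minimal sketch of retrieval-based in-context prompt editing (illustrative only).
# Assumes: a list of training captions, a sentence-embedding model from the
# sentence-transformers library, and a user-supplied `llm_complete` function
# that sends a text prompt to any instruction-following LLM and returns text.

import numpy as np
from sentence_transformers import SentenceTransformer

def retrieve_exemplars(user_prompt, training_captions, embedder, k=3):
    """Return the k training captions most similar to the user prompt."""
    prompt_vec = embedder.encode([user_prompt], normalize_embeddings=True)
    caption_vecs = embedder.encode(training_captions, normalize_embeddings=True)
    scores = caption_vecs @ prompt_vec.T  # cosine similarity (vectors are normalized)
    top_idx = np.argsort(scores.ravel())[::-1][:k]
    return [training_captions[i] for i in top_idx]

def edit_prompt(user_prompt, exemplars, llm_complete):
    """Rewrite the user prompt in the style of the retrieved training captions."""
    demo_block = "\n".join(f"- {c}" for c in exemplars)
    instruction = (
        "The following are captions from the training set of a text-to-audio model:\n"
        f"{demo_block}\n\n"
        "Rewrite the user prompt below so it matches the style and level of detail "
        "of these captions, without changing its meaning.\n"
        f"User prompt: {user_prompt}\n"
        "Rewritten prompt:"
    )
    return llm_complete(instruction).strip()

if __name__ == "__main__":
    captions = [
        "A dog barks repeatedly while birds chirp in the background",
        "Heavy rain falls on a tin roof with distant thunder",
        "A crowd applauds and cheers in a large indoor venue",
    ]
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    exemplars = retrieve_exemplars("dog barking", captions, embedder, k=2)
    # `llm_complete` is a placeholder for any text-completion backend.
    edited = edit_prompt("dog barking", exemplars,
                         lambda p: "A dog barks loudly outdoors")
    print(edited)
```

The retrieval step keeps the exemplars close to the user's intent, while the rewriting step nudges the prompt toward the caption distribution the audio model was trained on, which is the mechanism the abstract attributes the quality improvement to.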