

Video Object Segmentation-Aware Audio Generation

September 30, 2025
Authors: Ilpo Viertola, Vladimir Iashin, Esa Rahtu
cs.AI

Abstract

Existing multimodal audio generation models often lack precise user control, which limits their applicability in professional Foley workflows. In particular, these models attend to the entire video and do not provide precise methods for prioritizing a specific object within a scene, which leads to unnecessary background sounds or a focus on the wrong objects. To address this gap, we introduce the novel task of video object segmentation-aware audio generation, which explicitly conditions sound synthesis on object-level segmentation maps. We present SAGANet, a new multimodal generative model that enables controllable audio generation by leveraging visual segmentation masks along with video and textual cues. Our model provides users with fine-grained and visually localized control over audio generation. To support this task and further research on segmentation-aware Foley, we propose Segmented Music Solos, a benchmark dataset of musical instrument performance videos with segmentation information. Our method demonstrates substantial improvements over current state-of-the-art methods and sets a new standard for controllable, high-fidelity Foley synthesis. Code, samples, and Segmented Music Solos are available at https://saganet.notion.site
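To make the conditioning setup concrete, the sketch below shows how a segmentation-aware Foley model of the kind described in the abstract could be invoked: generation is conditioned jointly on video frames, an object-level segmentation mask that selects the sounding object, and a text prompt. The `SegmentationAwareFoleyModel` class and its `generate` signature are hypothetical placeholders for illustration, not the published SAGANet API.

```python
# Minimal sketch of segmentation-conditioned audio generation.
# The interface below is an assumption made for illustration; the paper's
# code may expose a different API. It only demonstrates the three
# conditioning signals named in the abstract: video, mask, and text.
import numpy as np


class SegmentationAwareFoleyModel:
    """Placeholder for a model that maps (video, mask, text) -> audio."""

    def generate(self, frames: np.ndarray, masks: np.ndarray, prompt: str,
                 sample_rate: int = 16_000, duration_s: float = 5.0) -> np.ndarray:
        # frames: (T, H, W, 3) RGB video clip
        # masks:  (T, H, W) binary map selecting the target (sounding) object
        # prompt: free-form text cue, e.g. "violin, bowed, legato"
        assert frames.shape[:3] == masks.shape, "mask must align with frames"
        # A real model would fuse these conditions and decode a waveform;
        # this stand-in simply returns silence of the requested length.
        return np.zeros(int(sample_rate * duration_s), dtype=np.float32)


# Example usage: prioritize one instrument in a multi-object scene.
model = SegmentationAwareFoleyModel()
video = np.zeros((120, 224, 224, 3), dtype=np.uint8)   # 120 RGB frames
mask = np.zeros((120, 224, 224), dtype=bool)
mask[:, 60:160, 80:180] = True                         # highlight the target object
audio = model.generate(video, mask, prompt="acoustic guitar strumming")
print(audio.shape)
```

The key design point this illustrates is that the mask is a per-frame, object-level signal rather than a whole-frame condition, which is what allows the user to steer synthesis toward one object and away from background sources.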