Audio Dialogues: Dialogues dataset for audio and music understanding

April 11, 2024
作者: Arushi Goel, Zhifeng Kong, Rafael Valle, Bryan Catanzaro
cs.AI

Abstract

Existing datasets for audio understanding primarily focus on single-turn interactions (i.e. audio captioning, audio question answering) for describing audio in natural language, thus limiting understanding audio via interactive dialogue. To address this gap, we introduce Audio Dialogues: a multi-turn dialogue dataset containing 163.8k samples for general audio sounds and music. In addition to dialogues, Audio Dialogues also has question-answer pairs to understand and compare multiple input audios together. Audio Dialogues leverages a prompting-based approach and caption annotations from existing datasets to generate multi-turn dialogues using a Large Language Model (LLM). We evaluate existing audio-augmented large language models on our proposed dataset to demonstrate the complexity and applicability of Audio Dialogues. Our code for generating the dataset will be made publicly available. Detailed prompts and generated dialogues can be found on the demo website https://audiodialogues.github.io/.
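The prompting-based generation pipeline described above can be sketched roughly as follows. This is a minimal illustration, not the authors' actual code: the prompt template, the turn-prefix format, and the generic `llm` callable are all assumptions introduced here for clarity.

```python
# Sketch: expand an existing caption annotation into a multi-turn
# dialogue via an LLM, then parse the output into (speaker, text) turns.

def build_prompt(caption: str, n_turns: int = 3) -> str:
    """Build a prompt asking an LLM to turn one audio caption
    into a multi-turn User/Assistant dialogue (hypothetical template)."""
    return (
        "You are given a description of an audio clip:\n"
        f'  "{caption}"\n'
        f"Write a {n_turns}-turn dialogue between a User asking about the "
        "audio and an Assistant answering only from the description. "
        "Prefix each turn with 'User:' or 'Assistant:'."
    )

def parse_dialogue(text: str) -> list[tuple[str, str]]:
    """Split the LLM's raw output into (speaker, utterance) pairs."""
    turns = []
    for line in text.splitlines():
        line = line.strip()
        for prefix in ("User:", "Assistant:"):
            if line.startswith(prefix):
                turns.append((prefix.rstrip(":"), line[len(prefix):].strip()))
    return turns

def generate_dialogue(caption: str, llm) -> list[tuple[str, str]]:
    """`llm` is any callable str -> str (e.g. a hosted chat model API)."""
    return parse_dialogue(llm(build_prompt(caption)))
```

In practice one would iterate this over every caption in the source datasets and filter malformed outputs; the paper's demo site shows the detailed prompts actually used.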
