

Advancing Speech Understanding in Speech-Aware Language Models with GRPO

September 21, 2025
Authors: Avishai Elmakies, Hagai Aronowitz, Nimrod Shabtay, Eli Schwartz, Ron Hoory, Avihu Dekel
cs.AI

Abstract

In this paper, we introduce a Group Relative Policy Optimization (GRPO)-based method for training Speech-Aware Large Language Models (SALLMs) on open-format speech understanding tasks, such as Spoken Question Answering and Automatic Speech Translation. SALLMs have proven highly effective for speech understanding tasks. GRPO has recently gained traction for its efficiency in training LLMs, and prior work has explored its application to SALLMs, primarily in multiple-choice tasks. Building on this, we focus on open-format tasks that better reflect the generative abilities of the models. Our approach leverages GRPO with BLEU as the reward signal to optimize SALLMs, and we demonstrate empirically that it surpasses standard supervised fine-tuning (SFT) across several key metrics. Finally, we explore the potential of incorporating off-policy samples within GRPO for these tasks, highlighting avenues for further improvement and research.
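To make the training objective concrete, below is a minimal sketch (not the authors' implementation) of how a sentence-level BLEU reward can feed GRPO's group-relative advantage computation: a group of sampled completions for one input is scored against the reference, and each reward is standardized within the group. It assumes the `sacrebleu` package; the model, sampling loop, and policy-gradient update are omitted, and `grpo_advantages` is a hypothetical helper name.

```python
# Sketch: group-relative advantages with BLEU as the reward signal,
# under the assumption that rewards are standardized within each group
# (mean 0, unit variance), as in standard GRPO formulations.
from sacrebleu import sentence_bleu


def grpo_advantages(candidates: list[str], reference: str, eps: float = 1e-8) -> list[float]:
    """Score a group of sampled completions against one reference
    and return their group-normalized advantages."""
    # Reward each candidate with sentence-level BLEU against the reference.
    rewards = [sentence_bleu(c, [reference]).score for c in candidates]
    mean = sum(rewards) / len(rewards)
    var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
    std = var ** 0.5
    # Group-relative advantage: standardized reward within the group.
    return [(r - mean) / (std + eps) for r in rewards]


# Example: a group of G=4 sampled translations for one speech input.
group = [
    "the cat sat on the mat",
    "a cat is sitting on the mat",
    "the dog ran away",
    "cat mat the on sat",
]
print(grpo_advantages(group, "the cat sat on the mat"))
```

Because the advantage is relative within the group, BLEU's absolute scale does not matter; only the ranking and spread of the sampled completions drive the policy update, which is what makes a corpus-style metric usable as a per-sample reward here.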