

Advancing Speech Understanding in Speech-Aware Language Models with GRPO

September 21, 2025
Authors: Avishai Elmakies, Hagai Aronowitz, Nimrod Shabtay, Eli Schwartz, Ron Hoory, Avihu Dekel
cs.AI

Abstract

In this paper, we introduce a Group Relative Policy Optimization (GRPO)-based method for training Speech-Aware Large Language Models (SALLMs) on open-format speech understanding tasks, such as Spoken Question Answering and Automatic Speech Translation. SALLMs have proven highly effective for speech understanding tasks. GRPO has recently gained traction for its efficiency in training large language models, and prior work has explored its application to SALLMs, primarily in multiple-choice tasks. Building on this, we focus on open-format tasks that better reflect the generative abilities of the models. Our approach leverages GRPO with BLEU as the reward signal to optimize SALLMs, and we demonstrate empirically that it surpasses standard supervised fine-tuning (SFT) across several key metrics. Finally, we explore the potential of incorporating off-policy samples within GRPO for these tasks, highlighting avenues for further improvement and research.
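The core idea the abstract describes is scoring each completion in a sampled group with BLEU and normalizing the rewards within the group to obtain advantages. Below is a minimal sketch of that advantage computation, assuming the sacrebleu package for scoring; the completion sampling and policy update steps are omitted, and all function and variable names here are illustrative, not the authors' implementation.

```python
# Sketch of GRPO-style group-relative advantages with a BLEU reward,
# assuming sacrebleu; `group_relative_advantages` is a hypothetical helper.
from statistics import mean, stdev
import sacrebleu

def group_relative_advantages(completions, reference):
    """Score each sampled completion with sentence-level BLEU against the
    reference, then normalize rewards within the group (GRPO-style)."""
    rewards = [
        sacrebleu.sentence_bleu(hyp, [reference]).score
        for hyp in completions
    ]
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    # Group-relative advantage: reward minus group mean, scaled by group std.
    return [(r - mu) / (sigma + 1e-6) for r in rewards]

# Usage: G completions sampled from the policy for one spoken question.
completions = [
    "The meeting is on Tuesday.",
    "The meeting takes place Tuesday.",
    "It happens on Wednesday.",
]
advantages = group_relative_advantages(
    completions, "The meeting is on Tuesday."
)
# Completions with above-average BLEU receive positive advantages and are
# reinforced by the policy update; below-average ones are penalized.
```

Because advantages are computed relative to the group rather than an absolute baseline, this setup needs no separate value model, which is the efficiency property the abstract attributes to GRPO.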