RLBFF: Binary Flexible Feedback to bridge between Human Feedback & Verifiable Rewards
September 25, 2025
Authors: Zhilin Wang, Jiaqi Zeng, Olivier Delalleau, Ellie Evans, Daniel Egert, Hoo-Chang Shin, Felipe Soares, Yi Dong, Oleksii Kuchaiev
cs.AI
Abstract
Reinforcement Learning with Human Feedback (RLHF) and Reinforcement Learning
with Verifiable Rewards (RLVR) are the main RL paradigms used in LLM
post-training, each offering distinct advantages. However, RLHF struggles with
interpretability and reward hacking because it relies on human judgments that
usually lack explicit criteria, whereas RLVR is limited in scope by its focus
on correctness-based verifiers. We propose Reinforcement Learning with Binary
Flexible Feedback (RLBFF), which combines the versatility of human-driven
preferences with the precision of rule-based verification, enabling reward
models to capture nuanced aspects of response quality beyond mere correctness.
RLBFF extracts principles that can be answered in a binary fashion (e.g.
accuracy of information: yes, or code readability: no) from natural language
feedback. Such principles can then be used to ground Reward Model training as
an entailment task (response satisfies or does not satisfy an arbitrary
principle). We show that Reward Models trained in this manner can outperform
Bradley-Terry models when matched for data and achieve top performance on
RM-Bench (86.2%) and JudgeBench (81.4%, #1 on leaderboard as of September 24,
2025). Additionally, users can specify principles of interest at inference time
to customize the focus of our reward models, in contrast to Bradley-Terry
models. Finally, we present a fully open source recipe (including data) to
align Qwen3-32B using RLBFF and our Reward Model, to match or exceed the
performance of o3-mini and DeepSeek R1 on general alignment benchmarks of
MT-Bench, WildBench, and Arena Hard v2 (at <5% of the inference cost).
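
The abstract only outlines the mechanism, so below is a minimal sketch of how an entailment-style, principle-conditioned reward query could be framed. Everything here is an illustrative assumption rather than the paper's released code: the `Principle` dataclass, the `ENTAILMENT_TEMPLATE` wording, the `binary_reward` helper, and the `toy_judge` stand-in for the trained reward model are all hypothetical names, and the actual scoring interface and aggregation used in RLBFF may differ.

```python
from dataclasses import dataclass
from typing import Callable

# A principle is a binary-answerable criterion extracted from natural language
# feedback, e.g. "accuracy of information" or "code readability".
@dataclass(frozen=True)
class Principle:
    name: str
    description: str

# Entailment-style query: does the response satisfy the principle? (yes/no)
ENTAILMENT_TEMPLATE = (
    "Prompt:\n{prompt}\n\n"
    "Response:\n{response}\n\n"
    "Principle: {principle}\n"
    "Does the response satisfy the principle? Answer yes or no."
)

def binary_reward(
    prompt: str,
    response: str,
    principle: Principle,
    judge: Callable[[str], str],
) -> float:
    """Map a yes/no entailment judgment to a {1.0, 0.0} reward.

    `judge` stands in for the trained reward model; here it is any callable
    returning "yes" or "no" for the formatted query (an assumption, not the
    paper's actual interface).
    """
    query = ENTAILMENT_TEMPLATE.format(
        prompt=prompt,
        response=response,
        principle=f"{principle.name}: {principle.description}",
    )
    verdict = judge(query).strip().lower()
    return 1.0 if verdict.startswith("yes") else 0.0

if __name__ == "__main__":
    # Toy stand-in judge: flags responses containing "maybe" as failing the
    # principle. A real setup would call the trained reward model instead.
    toy_judge = lambda query: "no" if "maybe" in query.lower() else "yes"

    # Principles can be chosen at inference time to steer what the reward
    # model attends to, unlike a fixed Bradley-Terry preference score.
    principles = [
        Principle("accuracy of information", "all stated facts are correct"),
        Principle("code readability", "any code shown is clearly formatted"),
    ]
    for p in principles:
        r = binary_reward(
            prompt="Explain what RLHF is.",
            response="RLHF maybe means reinforcement learning from human feedback.",
            principle=p,
            judge=toy_judge,
        )
        print(f"{p.name}: reward={r}")
```

During RL post-training, rewards from several such principle queries could be combined (for example, averaged) into a per-response score, though the abstract does not specify how RLBFF aggregates them.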