

IF-RewardBench: Benchmarking Judge Models for Instruction-Following Evaluation

March 5, 2026
Authors: Bosi Wen, Yilin Niu, Cunxiang Wang, Xiaoying Ling, Ying Zhang, Pei Ke, Hongning Wang, Minlie Huang
cs.AI

Abstract

Instruction-following is a foundational capability of large language models (LLMs), with its improvement hinging on scalable and accurate feedback from judge models. However, the reliability of current judge models in instruction-following remains underexplored due to several deficiencies of existing meta-evaluation benchmarks, such as their insufficient data coverage and oversimplified pairwise evaluation paradigms that misalign with model optimization scenarios. To this end, we propose IF-RewardBench, a comprehensive meta-evaluation benchmark for instruction-following that covers diverse instruction and constraint types. For each instruction, we construct a preference graph containing all pairwise preferences among multiple responses based on instruction-following quality. This design enables a listwise evaluation paradigm that assesses the ability of judge models to rank multiple responses, which is essential for guiding model alignment. Extensive experiments on IF-RewardBench reveal significant deficiencies in current judge models and demonstrate that our benchmark achieves a stronger positive correlation with downstream task performance than existing benchmarks. Our code and data are available at https://github.com/thu-coai/IF-RewardBench.
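To make the listwise paradigm concrete, below is a minimal Python sketch of how a judge model's ranking of multiple responses could be scored against a complete preference graph via pairwise agreement. The function name, data layout, and agreement metric are illustrative assumptions for exposition, not the benchmark's actual implementation.

```python
from itertools import combinations

def pairwise_agreement(gold_edges, ranking):
    """Fraction of gold pairwise preferences a judge's ranking respects.

    gold_edges: set of (winner, loser) response IDs, i.e. all pairwise
        preferences among the responses to one instruction (the preference
        graph). Assumed complete and acyclic for this sketch.
    ranking: list of response IDs as ordered by the judge model, best first.
    """
    position = {resp: i for i, resp in enumerate(ranking)}
    correct = sum(1 for winner, loser in gold_edges
                  if position[winner] < position[loser])
    return correct / len(gold_edges)

# Toy example: four responses with gold quality order a > b > c > d,
# giving a complete preference graph of 6 directed edges.
gold = {(w, l) for w, l in combinations("abcd", 2)}
judge_ranking = ["a", "c", "b", "d"]            # judge swaps b and c
print(pairwise_agreement(gold, judge_ranking))  # 5/6 ~= 0.833
```

Under this framing, a pairwise benchmark checks a single edge per example, whereas the listwise setting holds the judge accountable for every edge in the graph at once, which is closer to how preference signals are consumed during model alignment.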