

IF-RewardBench: Benchmarking Judge Models for Instruction-Following Evaluation

March 5, 2026
Authors: Bosi Wen, Yilin Niu, Cunxiang Wang, Xiaoying Ling, Ying Zhang, Pei Ke, Hongning Wang, Minlie Huang
cs.AI

Abstract

Instruction-following is a foundational capability of large language models (LLMs), and improving it hinges on scalable and accurate feedback from judge models. However, the reliability of current judge models at evaluating instruction-following remains underexplored, owing to deficiencies in existing meta-evaluation benchmarks such as insufficient data coverage and an oversimplified pairwise evaluation paradigm that is misaligned with model optimization scenarios. To address this, we propose IF-RewardBench, a comprehensive meta-evaluation benchmark for instruction-following that covers diverse instruction and constraint types. For each instruction, we construct a preference graph containing all pairwise preferences among multiple responses, based on instruction-following quality. This design enables a listwise evaluation paradigm that assesses a judge model's ability to rank multiple responses, which is essential for guiding model alignment. Extensive experiments on IF-RewardBench reveal significant deficiencies in current judge models and show that our benchmark correlates more strongly with downstream task performance than existing benchmarks. Our code and data are available at https://github.com/thu-coai/IF-RewardBench.
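As a concrete illustration of the listwise paradigm described above, the sketch below shows one way a complete preference graph over a set of responses could be collapsed into a ranking. This is a hypothetical example rather than the benchmark's released code: the `judge` callable, the `rank_by_copeland` helper, and the choice of Copeland scores (pairwise win counts) as the aggregation rule are all assumptions made for illustration.

```python
from itertools import combinations

# Hypothetical sketch: turn a judge's pairwise preferences over N responses
# into a listwise ranking. `judge(a, b)` is assumed to return whichever of
# the two responses better follows the instruction.

def rank_by_copeland(responses, judge):
    """Rank responses by Copeland score: number of pairwise wins."""
    wins = {r: 0 for r in responses}
    # Query every unordered pair once, i.e. the complete preference graph.
    for a, b in combinations(responses, 2):
        wins[judge(a, b)] += 1
    # Best-first ordering by win count.
    return sorted(responses, key=lambda r: wins[r], reverse=True)

# Toy judge that prefers the response satisfying more (mock) constraints.
constraints_met = {"resp_A": 3, "resp_B": 1, "resp_C": 2}
toy_judge = lambda a, b: a if constraints_met[a] >= constraints_met[b] else b

print(rank_by_copeland(list(constraints_met), toy_judge))
# -> ['resp_A', 'resp_C', 'resp_B']
```

Because the graph carries every pairwise edge, a full ordering can still be read off even when individual judgments are noisy or intransitive; Copeland scoring is only one simple aggregation rule, chosen here for brevity.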