

Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models

May 2, 2024
作者: Seungone Kim, Juyoung Suk, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo
cs.AI

Abstract

Proprietary LMs such as GPT-4 are often employed to assess the quality of responses from various LMs. However, concerns including transparency, controllability, and affordability strongly motivate the development of open-source LMs specialized in evaluations. On the other hand, existing open evaluator LMs exhibit critical shortcomings: 1) they issue scores that significantly diverge from those assigned by humans, and 2) they lack the flexibility to perform both direct assessment and pairwise ranking, the two most prevalent forms of assessment. Additionally, they do not possess the ability to evaluate based on custom evaluation criteria, focusing instead on general attributes like helpfulness and harmlessness. To address these issues, we introduce Prometheus 2, a more powerful evaluator LM than its predecessor that closely mirrors human and GPT-4 judgements. Moreover, it is capable of processing both direct assessment and pairwise ranking formats combined with user-defined evaluation criteria. On four direct assessment benchmarks and four pairwise ranking benchmarks, Prometheus 2 scores the highest correlation and agreement with humans and proprietary LM judges among all tested open evaluator LMs. Our models, code, and data are all publicly available at https://github.com/prometheus-eval/prometheus-eval.
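
The two evaluation formats mentioned above can be made concrete with a short sketch. The snippet below is an illustrative example only, not the official prometheus-eval package or the paper's exact prompt templates: it assumes the Hugging Face checkpoint name prometheus-eval/prometheus-7b-v2.0 and uses simplified, hypothetical prompt wording for direct assessment (scoring a single response 1-5 against a custom rubric) and pairwise ranking (choosing between two responses). See the linked repository for the official prompts and tooling.

```python
# Minimal sketch (NOT the official prometheus-eval API): prompting a Prometheus 2
# checkpoint for direct assessment and pairwise ranking with a custom rubric.
# The model id and prompt wording are illustrative assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "prometheus-eval/prometheus-7b-v2.0"  # assumed Hugging Face model id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def build_direct_assessment_prompt(instruction, response, rubric):
    """Direct assessment: score one response from 1 to 5 against a custom rubric."""
    return (
        "You are a fair evaluator. Given an instruction, a response, and a score "
        "rubric, write feedback and then assign an integer score from 1 to 5.\n\n"
        f"Instruction:\n{instruction}\n\nResponse:\n{response}\n\n"
        f"Score rubric:\n{rubric}\n\nFeedback and score:"
    )

def build_pairwise_ranking_prompt(instruction, response_a, response_b, rubric):
    """Pairwise ranking: decide which of two responses better satisfies the criterion."""
    return (
        "You are a fair evaluator. Given an instruction, two responses, and an "
        "evaluation criterion, write feedback and then answer 'A' or 'B'.\n\n"
        f"Instruction:\n{instruction}\n\nResponse A:\n{response_a}\n\n"
        f"Response B:\n{response_b}\n\nCriterion:\n{rubric}\n\nFeedback and verdict:"
    )

def judge(prompt):
    """Generate the evaluator's feedback and verdict for a formatted prompt."""
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=512, do_sample=False)
    return tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True)
```

Under these assumptions, a call such as judge(build_direct_assessment_prompt(...)) would return the model's feedback followed by a 1-5 score, while the pairwise prompt yields a verdict of 'A' or 'B'.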