Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models
May 2, 2024
Authors: Seungone Kim, Juyoung Suk, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo
cs.AI
Abstract
Proprietary LMs such as GPT-4 are often employed to assess the quality of
responses from various LMs. However, concerns including transparency,
controllability, and affordability strongly motivate the development of
open-source LMs specialized in evaluations. At the same time, existing open
evaluator LMs exhibit critical shortcomings: 1) they issue scores that
significantly diverge from those assigned by humans, and 2) they lack the
flexibility to perform both direct assessment and pairwise ranking, the two
most prevalent forms of assessment. Additionally, they do not possess the
ability to evaluate based on custom evaluation criteria, focusing instead on
general attributes like helpfulness and harmlessness. To address these issues,
we introduce Prometheus 2, a more powerful evaluator LM than its predecessor
that closely mirrors human and GPT-4 judgements. Moreover, it is capable of
processing both direct assessment and pairwise ranking formats together with
user-defined evaluation criteria. On four direct assessment benchmarks and four
pairwise ranking benchmarks, Prometheus 2 scores the highest correlation and
agreement with humans and proprietary LM judges among all tested open evaluator
LMs. Our models, code, and data are all publicly available at
https://github.com/prometheus-eval/prometheus-eval.
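As a rough illustration of the two assessment formats named in the abstract, the sketch below shows how direct assessment and pairwise ranking differ only in the prompt: the former asks for feedback plus a 1-5 score against a rubric, the latter asks which of two responses better satisfies the same criterion. The checkpoint name prometheus-eval/prometheus-7b-v2.0 and the prompt wording are assumptions for illustration; the prometheus-eval repository linked above provides the official templates and a higher-level interface.

```python
# Hypothetical usage sketch, not the official API: prompting a Prometheus-2-style
# evaluator for direct assessment and pairwise ranking via a plain transformers
# text-generation pipeline. Checkpoint name and prompt phrasing are assumptions;
# see the prometheus-eval repository for the exact templates the model expects.
from transformers import pipeline

judge = pipeline(
    "text-generation",
    model="prometheus-eval/prometheus-7b-v2.0",  # assumed checkpoint name
    device_map="auto",
)

rubric = "Is the response factually accurate and does it directly answer the question?"
instruction = "Explain why the sky appears blue."
response_a = "Rayleigh scattering disperses shorter (blue) wavelengths of sunlight more strongly."
response_b = "The sky reflects the color of the ocean."

# Direct assessment: feedback followed by an integer score from 1 to 5.
direct_prompt = (
    f"###Instruction: {instruction}\n"
    f"###Response: {response_a}\n"
    f"###Rubric: {rubric}\n"
    "Write feedback, then output '[RESULT] <score from 1 to 5>'."
)

# Pairwise ranking: pick the better of two responses under the same rubric.
pairwise_prompt = (
    f"###Instruction: {instruction}\n"
    f"###Response A: {response_a}\n"
    f"###Response B: {response_b}\n"
    f"###Rubric: {rubric}\n"
    "Write feedback, then output '[RESULT] A' or '[RESULT] B'."
)

for prompt in (direct_prompt, pairwise_prompt):
    print(judge(prompt, max_new_tokens=256)[0]["generated_text"])
```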