Answer Matching Outperforms Multiple Choice for Language Model Evaluation
July 3, 2025
Authors: Nikhil Chandak, Shashwat Goel, Ameya Prabhu, Moritz Hardt, Jonas Geiping
cs.AI
Abstract
Multiple choice benchmarks have long been the workhorse of language model
evaluation because grading multiple choice is objective and easy to automate.
However, we show multiple choice questions from popular benchmarks can often be
answered without even seeing the question. These shortcuts arise from a
fundamental limitation of discriminative evaluation not shared by evaluations
of the model's free-form, generative answers. Until recently, there appeared to
be no viable, scalable alternative to multiple choice--but we show that this
has changed. We consider generative evaluation via what we call answer
matching: Give the candidate model the question without the options, have it
generate a free-form response, then use a modern language model with the
reference answer to determine if the response matches the reference. To compare
the validity of different evaluation strategies, we annotate MMLU-Pro and
GPQA-Diamond to obtain human grading data, and measure the agreement of each
evaluation approach. We find answer matching using recent models--even small
ones--achieves near-perfect agreement, in the range of inter-annotator
agreement. In contrast, both multiple choice evaluation and using
LLM-as-a-judge without reference answers align poorly with human grading.
Improving evaluations via answer matching is not merely a conceptual concern:
the rankings of several models change significantly when evaluating their
free-form responses with answer matching. In light of these findings, we
discuss how to move the evaluation ecosystem from multiple choice to answer
matching.
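
The answer-matching protocol described in the abstract is straightforward to prototype. Below is a minimal sketch, assuming a generic `generate(prompt)` callable that wraps whichever language model serves as candidate or grader; the prompt wording and the `generate`, `answer_matching_grade`, and `evaluate` helpers are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of generative evaluation via answer matching.
# The candidate model sees only the question (no answer options); a grader
# model is given the reference answer and judges whether the free-form
# response matches it.

def answer_matching_grade(question: str, reference_answer: str,
                          candidate_response: str, generate) -> bool:
    """Ask a grader LLM whether the candidate's free-form response matches
    the reference answer. Returns True if the grader judges a match."""
    prompt = (
        "You are grading an exam answer.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference_answer}\n"
        f"Student response: {candidate_response}\n"
        "Does the student response express the same answer as the reference? "
        "Reply with exactly 'MATCH' or 'NO MATCH'."
    )
    verdict = generate(prompt).strip().upper()
    return verdict.startswith("MATCH")


def evaluate(benchmark, candidate_generate, grader_generate) -> float:
    """Run answer-matching evaluation over a benchmark of
    {"question": ..., "reference": ...} items and return accuracy."""
    correct = 0
    for item in benchmark:
        response = candidate_generate(item["question"])
        if answer_matching_grade(item["question"], item["reference"],
                                 response, grader_generate):
            correct += 1
    return correct / len(benchmark)
```

Note the design distinction the abstract draws: unlike reference-free LLM-as-a-judge setups, the grader here is always given the reference answer, which is what the authors find brings agreement with human grading into the inter-annotator range.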