Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges

June 18, 2024
作者: Aman Singh Thakur, Kartik Choudhary, Venkat Srinik Ramayapally, Sankaran Vaidyanathan, Dieuwke Hupkes
cs.AI

Abstract

Offering a promising solution to the scalability challenges associated with human evaluation, the LLM-as-a-judge paradigm is rapidly gaining traction as an approach to evaluating large language models (LLMs). However, there are still many open questions about the strengths and weaknesses of this paradigm, and what potential biases it may hold. In this paper, we present a comprehensive study of the performance of various LLMs acting as judges. We leverage TriviaQA as a benchmark for assessing objective knowledge reasoning of LLMs and evaluate them alongside human annotations which we found to have a high inter-annotator agreement. Our study includes 9 judge models and 9 exam taker models -- both base and instruction-tuned. We assess the judge models' alignment across different model sizes, families, and judge prompts. Among other results, our research rediscovers the importance of using Cohen's kappa as a metric of alignment as opposed to simple percent agreement, showing that judges with high percent agreement can still assign vastly different scores. We find that both Llama-3 70B and GPT-4 Turbo have an excellent alignment with humans, but in terms of ranking exam taker models, they are outperformed by both JudgeLM-7B and the lexical judge Contains, which have up to 34 points lower human alignment. Through error analysis and various other studies, including the effects of instruction length and leniency bias, we hope to provide valuable lessons for using LLMs as judges in the future.
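The abstract's point that high percent agreement can mask poor alignment is easy to see with a small sketch. The example below (illustrative only; the numbers are hypothetical and not from the paper) contrasts percent agreement with Cohen's kappa for a lenient judge scoring a set of answers where most are in fact correct:

```python
# Illustrative sketch: why Cohen's kappa can diverge from simple percent
# agreement. The "human" and "judge" label sequences are hypothetical.
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items on which the two raters give the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance given each rater's marginal label frequencies."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if the raters labelled independently.
    p_e = sum((ca[l] / n) * (cb[l] / n) for l in set(a) | set(b))
    return (p_o - p_e) / (1 - p_e)

# 90 of 100 answers are actually correct (label 1). A lenient judge that
# says "correct" for 98 of them still matches the human on 92 items.
human = [1] * 90 + [0] * 10
judge = [1] * 98 + [0] * 2

print(percent_agreement(human, judge))  # 0.92 -- looks excellent
print(cohens_kappa(human, judge))       # ~0.31 -- chance-corrected, much weaker
```

Because most answers are correct, a judge biased toward "correct" agrees with the human almost by default; kappa discounts that baseline, which is why the paper favors it over raw percent agreement.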

