

Judging the Judges: Evaluating Alignment and Vulnerabilities in LLMs-as-Judges

June 18, 2024
Authors: Aman Singh Thakur, Kartik Choudhary, Venkat Srinik Ramayapally, Sankaran Vaidyanathan, Dieuwke Hupkes
cs.AI

Abstract

Offering a promising solution to the scalability challenges associated with human evaluation, the LLM-as-a-judge paradigm is rapidly gaining traction as an approach to evaluating large language models (LLMs). However, there are still many open questions about the strengths and weaknesses of this paradigm, and what potential biases it may hold. In this paper, we present a comprehensive study of the performance of various LLMs acting as judges. We leverage TriviaQA as a benchmark for assessing objective knowledge reasoning of LLMs and evaluate them alongside human annotations, which we found to have high inter-annotator agreement. Our study includes 9 judge models and 9 exam-taker models -- both base and instruction-tuned. We assess the judge models' alignment across different model sizes, families, and judge prompts. Among other results, our research rediscovers the importance of using Cohen's kappa as a metric of alignment, as opposed to simple percent agreement, showing that judges with high percent agreement can still assign vastly different scores. We find that both Llama-3 70B and GPT-4 Turbo have excellent alignment with humans, but in terms of ranking exam-taker models, they are outperformed by both JudgeLM-7B and the lexical judge Contains, which have up to 34 points lower human alignment. Through error analysis and various other studies, including the effects of instruction length and leniency bias, we hope to provide valuable lessons for using LLMs as judges in the future.
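
The abstract's point about agreement metrics is easy to see with a toy calculation. Below is a minimal sketch (not from the paper; the labels and counts are hypothetical, and it assumes scikit-learn is available) showing how a lenient judge can reach 90% percent agreement with human annotators while scoring a Cohen's kappa of 0, i.e., no better than chance:

```python
# Toy illustration of percent agreement vs. Cohen's kappa.
# Labels are hypothetical; requires scikit-learn.
from sklearn.metrics import cohen_kappa_score

# Hypothetical binary verdicts on 20 exam-taker answers (1 = judged correct).
# 90% of the answers are actually correct, so the class balance is skewed.
human = [1] * 18 + [0] * 2
judge = [1] * 20  # a lenient judge that marks every answer correct

percent_agreement = sum(h == j for h, j in zip(human, judge)) / len(human)
kappa = cohen_kappa_score(human, judge)

print(f"Percent agreement: {percent_agreement:.2f}")  # 0.90 -> looks high
print(f"Cohen's kappa:     {kappa:.2f}")              # 0.00 -> chance level
```

Because most answers are correct, always answering "correct" matches the humans 90% of the time, but the chance-corrected kappa exposes that the judge discriminates nothing; this is why kappa is the more informative alignment metric here.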

