LiveBench: A Challenging, Contamination-Free LLM Benchmark
June 27, 2024
作者: Colin White, Samuel Dooley, Manley Roberts, Arka Pal, Ben Feuer, Siddhartha Jain, Ravid Shwartz-Ziv, Neel Jain, Khalid Saifullah, Siddartha Naidu, Chinmay Hegde, Yann LeCun, Tom Goldstein, Willie Neiswanger, Micah Goldblum
cs.AI
Abstract
Test set contamination, wherein test data from a benchmark ends up in a newer
model's training set, is a well-documented obstacle for fair LLM evaluation and
can quickly render benchmarks obsolete. To mitigate this, many recent
benchmarks crowdsource new prompts and evaluations from human or LLM judges;
however, these can introduce significant biases, and break down when scoring
hard questions. In this work, we introduce a new benchmark for LLMs designed to
be immune to both test set contamination and the pitfalls of LLM judging and
human crowdsourcing. We release LiveBench, the first benchmark that (1)
contains frequently-updated questions from recent information sources, (2)
scores answers automatically according to objective ground-truth values, and
(3) contains a wide variety of challenging tasks, spanning math, coding,
reasoning, language, instruction following, and data analysis. To achieve this,
LiveBench contains questions that are based on recently-released math
competitions, arXiv papers, news articles, and datasets, and it contains
harder, contamination-free versions of tasks from previous benchmarks such as
Big-Bench Hard, AMPS, and IFEval. We evaluate many prominent closed-source
models, as well as dozens of open-source models ranging from 0.5B to 110B in
size. LiveBench is difficult, with top models achieving below 65% accuracy. We
release all questions, code, and model answers. Questions will be added and
updated on a monthly basis, and we will release new tasks and harder versions
of tasks over time so that LiveBench can distinguish between the capabilities
of LLMs as they improve in the future. We welcome community engagement and
collaboration for expanding the benchmark tasks and models.
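To make the scoring model described above concrete, the following is a minimal sketch (not the authors' released code) of automatic evaluation against objective ground-truth values: each question carries a reference answer, a model's final answer is normalized and compared to it, and per-question 0/1 scores are averaged within each task. The record fields `task`, `ground_truth`, and `model_answer` are hypothetical placeholders, not LiveBench's actual schema.

```python
# Minimal sketch, assuming a simple record layout; not LiveBench's actual code.
from collections import defaultdict


def normalize(answer: str) -> str:
    """Lowercase, trim, and strip trailing punctuation for a fair exact match."""
    return answer.strip().lower().rstrip(".")


def score_question(model_answer: str, ground_truth: str) -> int:
    """Return 1 if the normalized answers match exactly, else 0."""
    return int(normalize(model_answer) == normalize(ground_truth))


def score_benchmark(records: list[dict]) -> dict[str, float]:
    """Average per-question scores within each task to get task-level accuracy."""
    per_task: dict[str, list[int]] = defaultdict(list)
    for r in records:
        per_task[r["task"]].append(score_question(r["model_answer"], r["ground_truth"]))
    return {task: sum(scores) / len(scores) for task, scores in per_task.items()}


if __name__ == "__main__":
    demo = [
        {"task": "math", "ground_truth": "42", "model_answer": "42"},
        {"task": "math", "ground_truth": "7", "model_answer": "9"},
        {"task": "reasoning", "ground_truth": "yes", "model_answer": "Yes."},
    ]
    print(score_benchmark(demo))  # e.g. {'math': 0.5, 'reasoning': 1.0}
```

In practice, tasks such as math or coding would use task-specific checkers (e.g., numeric comparison or test execution) rather than plain string matching; the exact-match scorer here is only the simplest illustration of objective, judge-free grading.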