LiveBench: A Challenging, Contamination-Free LLM Benchmark
June 27, 2024
Authors: Colin White, Samuel Dooley, Manley Roberts, Arka Pal, Ben Feuer, Siddhartha Jain, Ravid Shwartz-Ziv, Neel Jain, Khalid Saifullah, Siddartha Naidu, Chinmay Hegde, Yann LeCun, Tom Goldstein, Willie Neiswanger, Micah Goldblum
cs.AI
Abstract
Test set contamination, wherein test data from a benchmark ends up in a newer
model's training set, is a well-documented obstacle for fair LLM evaluation and
can quickly render benchmarks obsolete. To mitigate this, many recent
benchmarks crowdsource new prompts and evaluations from human or LLM judges;
however, these can introduce significant biases, and break down when scoring
hard questions. In this work, we introduce a new benchmark for LLMs designed to
be immune to both test set contamination and the pitfalls of LLM judging and
human crowdsourcing. We release LiveBench, the first benchmark that (1)
contains frequently-updated questions from recent information sources, (2)
scores answers automatically according to objective ground-truth values, and
(3) contains a wide variety of challenging tasks, spanning math, coding,
reasoning, language, instruction following, and data analysis. To achieve this,
LiveBench contains questions that are based on recently-released math
competitions, arXiv papers, news articles, and datasets, and it contains
harder, contamination-free versions of tasks from previous benchmarks such as
Big-Bench Hard, AMPS, and IFEval. We evaluate many prominent closed-source
models, as well as dozens of open-source models ranging from 0.5B to 110B in
size. LiveBench is difficult, with top models achieving below 65% accuracy. We
release all questions, code, and model answers. Questions will be added and
updated on a monthly basis, and we will release new tasks and harder versions
of tasks over time so that LiveBench can distinguish between the capabilities
of LLMs as they improve in the future. We welcome community engagement and
collaboration for expanding the benchmark tasks and models.
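The abstract's central design choice is point (2): answers are graded automatically against objective ground-truth values rather than by an LLM or human judge. As a rough illustration of what such grading involves, the sketch below shows deterministic scoring for a math-style question. This is not LiveBench's actual scoring code; the record fields and the answer-extraction heuristic are assumptions for illustration only.

```python
# Minimal sketch (illustrative assumptions, not LiveBench's implementation) of
# automatic grading against an objective ground truth: extract the model's
# final answer from its free-form response and compare it to the stored value.
import re


def extract_final_answer(response: str) -> str:
    """Heuristic: take the last number-like token as the model's final answer."""
    matches = re.findall(r"-?\d+(?:\.\d+)?", response)
    return matches[-1] if matches else response.strip()


def score(response: str, ground_truth: str) -> float:
    """Return 1.0 if the extracted answer exactly matches the ground truth, else 0.0."""
    return float(extract_final_answer(response) == ground_truth.strip())


if __name__ == "__main__":
    # Hypothetical question record; field names are assumptions.
    question = {"prompt": "What is 17 * 23?", "ground_truth": "391"}
    print(score("17 * 23 = 391, so the answer is 391.", question["ground_truth"]))  # 1.0
    print(score("I believe it is 389.", question["ground_truth"]))                  # 0.0
```

In practice, each task family (coding, data analysis, instruction following, and so on) would need its own parsing and matching rules, but the principle the abstract describes is the same: a deterministic comparison against a known correct value, with no judge model in the loop.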