From Crowdsourced Data to High-Quality Benchmarks: Arena-Hard and BenchBuilder Pipeline
June 17, 2024
Authors: Tianle Li, Wei-Lin Chiang, Evan Frick, Lisa Dunlap, Tianhao Wu, Banghua Zhu, Joseph E. Gonzalez, Ion Stoica
cs.AI
Abstract
The rapid evolution of language models has necessitated the development of
more challenging benchmarks. Current static benchmarks often struggle to
consistently distinguish between the capabilities of different models and fail
to align with real-world user preferences. On the other hand, live
crowd-sourced platforms like the Chatbot Arena collect a wide range of natural
prompts and user feedback. However, these prompts vary in sophistication and
the feedback cannot be applied offline to new models. In order to ensure that
benchmarks keep up with the pace of LLM development, we address how one can
evaluate benchmarks on their ability to confidently separate models and their
alignment with human preference. Under these principles, we developed
BenchBuilder, a living benchmark that filters high-quality prompts from live
data sources to enable offline evaluation on fresh, challenging prompts.
BenchBuilder identifies seven indicators of a high-quality prompt, such as the
requirement for domain knowledge, and utilizes an LLM annotator to select a
high-quality subset of prompts from various topic clusters. The LLM evaluation
process employs an LLM judge to ensure a fully automated, high-quality, and
constantly updating benchmark. We apply BenchBuilder on prompts from the
Chatbot Arena to create Arena-Hard-Auto v0.1: 500 challenging user prompts from
a wide range of tasks. Arena-Hard-Auto v0.1 offers 3x tighter confidence
intervals than MT-Bench and achieves a state-of-the-art 89.1% agreement with
human preference rankings, all at a cost of only $25 and without human
labelers. The BenchBuilder pipeline enhances evaluation benchmarks and provides
a valuable tool for developers, enabling them to extract high-quality
benchmarks from extensive data with minimal effort.
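
The sketch below illustrates the filtering step described in the abstract: an LLM annotator scores each crowdsourced prompt against seven quality indicators, and a high-scoring subset is kept from every topic cluster. This is a minimal sketch under stated assumptions, not the authors' implementation; the indicator names other than "domain knowledge", the query_llm callable, and the score and selection thresholds are placeholders introduced for illustration.

# Minimal sketch of a BenchBuilder-style prompt filter (illustrative only).
# Assumptions: indicator names other than "domain knowledge", the query_llm
# callable, and the thresholds below are placeholders, not the paper's
# exact implementation.

from typing import Callable, Dict, List

# Seven quality indicators; only "domain knowledge" is named in the abstract,
# the rest are illustrative stand-ins.
QUALITY_INDICATORS = [
    "specificity",
    "domain knowledge",
    "complexity",
    "problem solving",
    "creativity",
    "technical accuracy",
    "real-world application",
]

ANNOTATOR_TEMPLATE = (
    "For the user prompt below, answer YES or NO for each criterion:\n"
    + "\n".join(f"- {c}" for c in QUALITY_INDICATORS)
    + "\n\nUser prompt:\n{prompt}"
)


def score_prompt(prompt: str, query_llm: Callable[[str], str]) -> int:
    """Count how many of the seven indicators the LLM annotator marks YES."""
    reply = query_llm(ANNOTATOR_TEMPLATE.format(prompt=prompt))
    return sum(1 for line in reply.splitlines()
               if line.strip().upper().endswith("YES"))


def build_benchmark(clustered_prompts: Dict[str, List[str]],
                    query_llm: Callable[[str], str],
                    min_score: int = 6,
                    per_cluster: int = 5) -> List[str]:
    """Select the highest-scoring prompts from each topic cluster."""
    selected: List[str] = []
    for topic, prompts in clustered_prompts.items():
        ranked = sorted(((score_prompt(p, query_llm), p) for p in prompts),
                        reverse=True)
        selected.extend(p for score, p in ranked[:per_cluster]
                        if score >= min_score)
    return selected

Combining a minimum indicator score with a per-cluster quota mirrors the abstract's stated goal of a benchmark that is both challenging and topically diverse.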