HackerRank-ASTRA: Evaluating Correctness & Consistency of Large Language Models on cross-domain multi-file project problems
January 31, 2025
Authors: Jun Xing, Mayur Bhatia, Sahil Phulwani, Darshan Suresh, Rafik Matta
cs.AI
Abstract
Evaluating the real-world applicability of large language models (LLMs)
provides valuable insights for their development and use in software
development tasks. Existing benchmarks often focus on standalone coding
problems or specific libraries, overlooking multi-file, project-based scenarios
and lacking a rigorous evaluation of consistency. The HackerRank-ASTRA
Benchmark introduces project-based coding problems that mirror real-world
scenarios. It evaluates model consistency through 32 runs (k = 32) and median
standard deviation while incorporating taxonomy-level analysis to assess
sub-skill capabilities. Initial evaluations on 65 problems show that the top
three models -- o1, o1-preview, and Claude-3.5-Sonnet-1022 -- achieved
comparable average scores of 75%, with no statistically significant differences
in performance. Notably, Claude-3.5-Sonnet-1022 demonstrated the highest
consistency across problems, with low variability (SD = 0.0497), which was
statistically significant compared to other models, highlighting its
reliability for real-world software development tasks.
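The consistency metric described in the abstract can be sketched in a few lines: compute the standard deviation of a model's scores over the k repeated runs of each problem, then take the median across problems. This is a minimal illustration, not the paper's actual evaluation code; the sample scores and the function name are hypothetical (the paper uses 65 problems and k = 32 runs).

```python
import statistics

def consistency_metric(scores_by_problem):
    """Median across problems of the per-problem standard deviation
    of scores over k repeated runs (k = 32 in the paper)."""
    per_problem_sd = [statistics.stdev(runs) for runs in scores_by_problem]
    return statistics.median(per_problem_sd)

# Hypothetical scores: 3 problems, 4 runs each (illustrative only).
scores = [
    [0.75, 0.80, 0.78, 0.77],
    [0.60, 0.62, 0.61, 0.59],
    [0.90, 0.85, 0.88, 0.87],
]
print(round(consistency_metric(scores), 4))  # → 0.0208
```

A lower median SD indicates that the model produces similar scores when re-run on the same problem, which is how the abstract distinguishes Claude-3.5-Sonnet-1022 (SD = 0.0497) despite its average score being statistically indistinguishable from the other top models.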