SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?
February 17, 2025
Authors: Samuel Miserendino, Michele Wang, Tejal Patwardhan, Johannes Heidecke
cs.AI
Abstract
We introduce SWE-Lancer, a benchmark of over 1,400 freelance software
engineering tasks from Upwork, valued at $1 million USD total in real-world
payouts. SWE-Lancer encompasses both independent engineering tasks, ranging
from $50 bug fixes to $32,000 feature implementations, and managerial tasks,
where models choose between technical implementation proposals. Independent
tasks are graded with end-to-end tests triple-verified by experienced software
engineers, while managerial decisions are assessed against the choices of the
original hired engineering managers. We evaluate model performance and find
that frontier models are still unable to solve the majority of tasks. To
facilitate future research, we open-source a unified Docker image and a public
evaluation split, SWE-Lancer Diamond
(https://github.com/openai/SWELancer-Benchmark). By mapping model performance
to monetary value, we hope SWE-Lancer enables greater research into the
economic impact of AI model development.
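The abstract's idea of mapping model performance to monetary value can be illustrated with a minimal sketch. The payouts and pass/fail outcomes below are invented for illustration; in the actual benchmark, payouts come from each task's real Upwork price and an independent task counts as solved only if it passes the triple-verified end-to-end tests.

```python
def total_earned(results):
    """Sum the payouts of tasks the model solved (passed all end-to-end tests)."""
    return sum(payout for payout, passed in results if passed)

# Hypothetical task outcomes: (payout in USD, whether the model's patch passed)
results = [
    (250.0, True),    # a $250 bug fix, solved
    (1000.0, False),  # a $1,000 feature implementation, failed
    (500.0, True),    # a $500 bug fix, solved
]

earned = total_earned(results)
available = sum(payout for payout, _ in results)
print(f"Earned ${earned:,.0f} of ${available:,.0f} ({earned / available:.0%})")
```

Reporting "dollars earned" rather than only a pass rate weights each task by its real-world price, which is what lets the benchmark speak to economic impact.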