S*: Test Time Scaling for Code Generation
February 20, 2025
Authors: Dacheng Li, Shiyi Cao, Chengkun Cao, Xiuyu Li, Shangyin Tan, Kurt Keutzer, Jiarong Xing, Joseph E. Gonzalez, Ion Stoica
cs.AI
Abstract
Increasing test-time compute for LLMs shows promise across domains but
remains underexplored in code generation, despite extensive study in math. In
this paper, we propose S*, the first hybrid test-time scaling framework that
substantially improves the coverage and selection accuracy of generated code.
S* extends the existing parallel scaling paradigm with sequential scaling to
push performance boundaries. It further leverages a novel selection mechanism
that adaptively generates distinguishing inputs for pairwise comparison,
combined with execution-grounded information to robustly identify correct
solutions. We evaluate across 12 Large Language Models and Large Reasoning
Models and show: (1) S* consistently improves performance across model families
and sizes, enabling a 3B model to outperform GPT-4o-mini; (2) S* enables
non-reasoning models to surpass reasoning models: GPT-4o-mini with S*
outperforms o1-preview by 3.7% on LiveCodeBench; (3) S* further boosts
state-of-the-art reasoning models: DeepSeek-R1-Distill-Qwen-32B with S*
achieves 85.7% on LiveCodeBench, approaching o1 (high) at 88.5%. Code will be
available at https://github.com/NovaSky-AI/SkyThought.
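To make the pipeline described in the abstract concrete, here is a minimal sketch of its three stages: parallel sampling of candidates, sequential revision driven by execution feedback, and execution-grounded pairwise selection. Every function name, signature, and default value below is an illustrative assumption, not the released implementation (the authors' code is at the GitHub link above).

# Hypothetical sketch of S*-style hybrid test-time scaling.
# All callables and hyperparameters are placeholder assumptions,
# not the authors' actual implementation.
import itertools
from typing import Callable, Dict, List


def s_star(
    problem: str,
    public_tests: List[Dict[str, str]],         # [{"input": ..., "output": ...}]
    generate: Callable[[str], str],              # LLM: problem -> candidate program
    revise: Callable[[str, str, str], str],      # LLM: problem, program, feedback -> revision
    make_input: Callable[[str, str, str], str],  # LLM: synthesize a distinguishing test input
    judge: Callable[[str, str, str, str], int],  # LLM: return 0 or 1 for the better output
    execute: Callable[[str, str], str],          # sandbox: run program on input, return stdout
    n_samples: int = 8,
    max_rounds: int = 3,
) -> str:
    # 1) Parallel scaling: sample several independent candidate programs.
    candidates = [generate(problem) for _ in range(n_samples)]

    # 2) Sequential scaling: iteratively revise each candidate using
    #    execution feedback from the public test cases.
    for k, prog in enumerate(candidates):
        for _ in range(max_rounds):
            failures = [t for t in public_tests
                        if execute(prog, t["input"]) != t["output"]]
            if not failures:
                break
            prog = revise(problem, prog, str(failures[0]))
        candidates[k] = prog

    # 3) Selection: for each candidate pair, adaptively generate an input
    #    meant to distinguish them, execute both programs on it, and let
    #    the model judge the two concrete outputs (execution grounding).
    wins = [0] * len(candidates)
    for i, j in itertools.combinations(range(len(candidates)), 2):
        x = make_input(problem, candidates[i], candidates[j])
        out_i, out_j = execute(candidates[i], x), execute(candidates[j], x)
        wins[(i, j)[judge(problem, x, out_i, out_j)]] += 1

    # Return the candidate with the most pairwise wins.
    return max(zip(wins, candidates), key=lambda p: p[0])[1]

Note that in this sketch the pairwise judging loop grows quadratically with n_samples; it is kept deliberately simple here and should be read as an outline of the idea rather than the paper's exact procedure.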