Surprisal-Guided Selection: Compute-Optimal Test-Time Strategies for Execution-Grounded Code Generation
February 7, 2026
Author: Jarrod Barnes
cs.AI
Abstract
Test-time training (TTT) adapts language models through gradient-based updates at inference. But is adaptation the right strategy? We study compute-optimal test-time strategies for verifiable execution-grounded (VEG) tasks: domains such as GPU kernel optimization, where a deterministic evaluator provides dense, continuous reward signals. Using KernelBench as our testbed and a 120B-parameter model (GPT-OSS-120B with LoRA adaptation), we find that search outperforms minimal adaptation (1-5 gradient steps): Best-of-N sampling achieves 90% task success (18/20 tasks) at K=64 across the full KernelBench L1 eval set, while TTT's best checkpoint reaches only 30.6% (3-seed mean), with TTT's "equivalent K" falling below 1, worse than single-sample inference. The failure mode is over-sharpening: gradient updates collapse diversity toward mediocre solutions rather than discovering optimal ones. Our main contribution is surprisal-guided selection: selecting the highest-surprisal (lowest-confidence) correct sample yields 80% success vs. 50% for most-confident selection, a 30-percentage-point improvement. Extending to surprisal-guided-top3 recovers oracle performance (100%); this zero-cost strategy holds up under length-controlled analysis. For dense-reward VEG tasks, compute should be allocated to sample diversity and intelligent selection rather than gradient adaptation. The surprisal-guided selection principle may generalize to other execution-grounded domains where optimal solutions occupy the distribution tail.
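To make the selection rule concrete, here is a minimal Python sketch, not the paper's implementation: it assumes each candidate kernel carries per-token log-probabilities from the sampler and a pass/fail verdict from the deterministic evaluator, and it estimates surprisal as mean negative log-probability per token (a length-controlled proxy). The Sample fields and function names are illustrative assumptions.

from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    code: str                     # generated kernel candidate (illustrative field)
    token_logprobs: List[float]   # per-token log-probabilities from the sampler
    passed: bool                  # deterministic evaluator verdict (compiles and is correct)

def surprisal(sample: Sample) -> float:
    # Mean negative log-probability per token: higher means lower model confidence.
    return -sum(sample.token_logprobs) / max(len(sample.token_logprobs), 1)

def surprisal_guided_select(samples: List[Sample], top_k: int = 1) -> List[Sample]:
    # Keep only samples the evaluator verified as correct, then prefer the
    # highest-surprisal (least confident) correct solutions.
    correct = [s for s in samples if s.passed]
    return sorted(correct, key=surprisal, reverse=True)[:top_k]

Under these assumptions, top_k=1 corresponds to surprisal-guided selection and top_k=3 to the surprisal-guided-top3 variant described in the abstract.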