Model Capability Dominates: Inference-Time Optimization Lessons from AIMO 3
April 16, 2026
Author: Natapong Nitarach
cs.AI
Abstract
Majority voting over multiple LLM attempts improves mathematical reasoning, but correlated errors limit the effective sample size. A natural fix is to assign different reasoning strategies to different voters. We test this approach, Diverse Prompt Mixer, in the AIMO 3 competition: 3 models, 23+ experiments, 50 IMO-level problems, a single H100 80 GB GPU, and a 5-hour time limit. Every prompt-level intervention fails: high-temperature sampling already decorrelates errors, and weaker strategies reduce accuracy more than they reduce correlation. At equal N=8, across an 8-point capability gap and every optimization tested, model capability dominates. The gap between the best majority-vote score (42/50) and pass@20 (~45.5) is selection loss, not prompt loss; a verifier-based selector could close it, but prompt engineering cannot.
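The two metrics the abstract contrasts can be made concrete with a small sketch. This is illustrative toy code, not the paper's pipeline: each problem has a list of hypothetical sampled answers and a ground truth; majority voting scores the most common answer, while pass@k asks whether any of the first k samples is correct. The gap between the two is the selection loss that a verifier-based selector could recover.

```python
# Illustrative sketch (not from the paper): majority voting vs. pass@k
# on hypothetical per-problem samples.
from collections import Counter


def majority_vote(answers):
    """Most common answer among the samples (ties broken arbitrarily)."""
    return Counter(answers).most_common(1)[0][0]


def majority_accuracy(problems):
    """Fraction of problems where the majority answer is correct."""
    correct = sum(majority_vote(ans) == truth for ans, truth in problems)
    return correct / len(problems)


def pass_at_k(problems, k):
    """Fraction of problems where ANY of the first k samples is correct."""
    hit = sum(truth in ans[:k] for ans, truth in problems)
    return hit / len(problems)


# Hypothetical data: (samples, ground truth). In the second problem the
# correct answer appears among the samples but is outvoted by correlated
# wrong answers -- selection loss that better voting cannot fix.
problems = [
    (["7", "7", "7", "3"], "7"),  # majority correct
    (["5", "5", "9", "5"], "9"),  # correct answer present, outvoted
]

print(majority_accuracy(problems))  # 0.5
print(pass_at_k(problems, 4))       # 1.0
```

At equal sample budgets, majority voting can only score what the plurality of samples agrees on; pass@k upper-bounds what a perfect selector over the same samples could achieve.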