SPICE: Self-Play In Corpus Environments Improves Reasoning
October 28, 2025
Authors: Bo Liu, Chuanyang Jin, Seungone Kim, Weizhe Yuan, Wenting Zhao, Ilia Kulikov, Xian Li, Sainbayar Sukhbaatar, Jack Lanchantin, Jason Weston
cs.AI
Abstract
Self-improving systems require environmental interaction for continuous
adaptation. We introduce SPICE (Self-Play In Corpus Environments), a
reinforcement learning framework where a single model acts in two roles: a
Challenger that mines documents from a large corpus to generate diverse
reasoning tasks, and a Reasoner that solves them. Through adversarial dynamics,
the Challenger creates an automatic curriculum at the frontier of the
Reasoner's capability, while corpus grounding provides the rich,
near-inexhaustible external signal necessary for sustained improvement. Unlike
existing ungrounded self-play methods that offer more limited benefits, SPICE
achieves consistent gains across mathematical (+8.9%) and general reasoning
(+9.8%) benchmarks on multiple model families. Our analysis reveals that
document grounding is the key ingredient that lets SPICE continuously set
itself increasingly challenging goals and achieve them, enabling sustained
self-improvement.
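
To make the loop described in the abstract concrete, here is a minimal toy sketch of one SPICE-style iteration in Python: the same model, acting as Challenger, mines a document from a corpus and derives a task from it; acting as Reasoner, it samples several attempts at that task; the Challenger is rewarded for tasks at the frontier of the Reasoner's ability. Everything beyond the abstract's description is an assumption: `ToyModel`, `frontier_reward`, the cloze-style task generation, and the update rule are illustrative stand-ins, not the paper's implementation.

```python
import random
from dataclasses import dataclass

# Toy passages standing in for the large document corpus; real SPICE mines
# a far larger collection (this list is purely illustrative).
CORPUS = [
    "The derivative of x squared is 2x",
    "Water boils at 100 degrees Celsius at sea level",
    "Paris is the capital of France",
]

@dataclass
class ToyModel:
    """A single model playing both roles; `skill` stands in for its weights."""
    skill: float = 0.3

    def make_task(self, document: str) -> tuple[str, str]:
        # Challenger role: derive a verifiable QA pair grounded in the
        # document. Here a trivial cloze over the last word; the paper's
        # actual task-generation prompting is not given in the abstract.
        words = document.split()
        return " ".join(words[:-1]) + " ___?", words[-1]

    def attempt(self, question: str, gold: str) -> bool:
        # Reasoner role, simulated: succeed with probability `skill`.
        # A real Reasoner would generate a solution checked against `gold`.
        return random.random() < self.skill

    def update(self, challenger_reward: float, reasoner_rewards: list) -> None:
        # Stand-in for the RL updates applied to the shared weights. In this
        # toy, only the Reasoner-side reward moves `skill`; real SPICE
        # updates the model's behavior in both roles.
        self.skill = min(1.0, self.skill +
                         0.02 * sum(reasoner_rewards) / len(reasoner_rewards))

def frontier_reward(pass_rate: float) -> float:
    """Assumed Challenger reward shaping: maximal when the task sits at the
    frontier of the Reasoner's ability (~50% pass rate), low when the task
    is trivial (rate near 1) or impossible (rate near 0)."""
    return 1.0 - 2.0 * abs(pass_rate - 0.5)

def spice_step(model: ToyModel, corpus: list, n_samples: int = 8) -> None:
    doc = random.choice(corpus)                # corpus grounding
    question, gold = model.make_task(doc)      # Challenger generates a task
    results = [model.attempt(question, gold)   # Reasoner samples attempts
               for _ in range(n_samples)]
    pass_rate = sum(results) / n_samples
    model.update(frontier_reward(pass_rate), [float(r) for r in results])

if __name__ == "__main__":
    model = ToyModel()
    for _ in range(100):
        spice_step(model, CORPUS)
    print(f"final toy skill: {model.skill:.2f}")
```

In the real framework, both update steps would be reinforcement-learning updates to the shared weights, and verifying the Reasoner's output against the document-grounded answer replaces the coin-flip simulation above.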