SE-Bench: Benchmarking Self-Evolution with Knowledge Internalization
February 4, 2026
Authors: Jiarui Yuan, Tailin Jin, Weize Chen, Zeyuan Liu, Zhiyuan Liu, Maosong Sun
cs.AI
Abstract
True self-evolution requires agents to act as lifelong learners that internalize novel experiences to solve future problems. However, rigorously measuring this foundational capability is hindered by two obstacles: the entanglement of prior knowledge, where "new" knowledge may appear in pre-training data, and the entanglement of reasoning complexity, where failures may stem from problem difficulty rather than an inability to recall learned knowledge. We introduce SE-Bench, a diagnostic environment that obfuscates the NumPy library and its API documentation into a pseudo-novel package with randomized identifiers. Agents are trained to internalize this package and evaluated on simple coding tasks without access to documentation, yielding a clean setting where tasks are trivial with the new API documentation but impossible for base models without it. Our investigation reveals three insights: (1) the Open-Book Paradox, where training with reference documentation inhibits retention, requiring "Closed-Book Training" to force knowledge compression into weights; (2) the RL Gap, where standard RL fails to fully internalize new knowledge due to PPO clipping and negative gradients; and (3) the viability of Self-Play for internalization, showing that models can learn from self-generated, noisy tasks when coupled with SFT, but not with RL. Overall, SE-Bench establishes a rigorous diagnostic platform for self-evolution with knowledge internalization. Our code and dataset can be found at https://github.com/thunlp/SE-Bench.
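To make the obfuscation idea concrete, here is a minimal sketch of how an API could be renamed into a pseudo-novel package with randomized identifiers. The function names, alias length, and seeding scheme below are illustrative assumptions for exposition, not the authors' actual SE-Bench pipeline.

```python
import random
import string

def make_alias(rng, length=8):
    # Draw a random lowercase identifier, e.g. "qzfwpkta".
    return "".join(rng.choice(string.ascii_lowercase) for _ in range(length))

def obfuscate_api(names, seed=0):
    # Map each public API name to a unique randomized alias.
    # A fixed seed keeps the mapping deterministic, so the same
    # pseudo-novel package can be regenerated for training and evaluation.
    rng = random.Random(seed)
    mapping = {}
    used = set()
    for name in names:
        alias = make_alias(rng)
        while alias in used:  # avoid accidental collisions
            alias = make_alias(rng)
        used.add(alias)
        mapping[name] = alias
    return mapping

# Hypothetical subset of NumPy's public API, for illustration only.
api = ["array", "zeros", "linspace", "matmul"]
mapping = obfuscate_api(api, seed=42)
```

Rewriting both the library's identifiers and its documentation with such a mapping yields tasks that are trivial given the new API docs but unsolvable from pre-training knowledge alone, which is exactly the separation the benchmark targets.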