One-shot Entropy Minimization
May 26, 2025
Authors: Zitian Gao, Lynx Chen, Joey Zhou, Bryan Dai
cs.AI
Abstract
We trained 13,440 large language models and found that entropy minimization
requires only a single unlabeled example and 10 optimization steps to achieve
performance improvements comparable to, or even greater than, those obtained
using thousands of examples and carefully designed rewards in rule-based
reinforcement learning. This striking result may prompt a rethinking of
post-training paradigms for large language models. Our code is available at
https://github.com/zitian-gao/one-shot-em.
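The abstract describes minimizing the entropy of the model's own output distribution on a single unlabeled prompt, with no labels or external rewards. As a rough illustration of that idea, the sketch below runs 10 steps of token-level entropy minimization over self-generated continuations. The model name, prompt, learning rate, and sampling settings are placeholders, not the paper's actual configuration; see the repository above for the real method.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

# All names below are illustrative placeholders, not the paper's setup.
model_name = "gpt2"
prompt = "Question: What is 12 * 34? Answer:"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

for step in range(10):  # the abstract reports only ~10 optimization steps
    # Sample a continuation from the current policy; no labels or rewards used.
    with torch.no_grad():
        generated = model.generate(
            **inputs,
            max_new_tokens=64,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
        )

    # Recompute logits over the sampled sequence with gradients enabled;
    # the logit at position i predicts the token at position i + 1.
    logits = model(generated).logits[:, :-1, :]

    # Token-level predictive entropy: H = -sum_v p(v) log p(v).
    log_probs = F.log_softmax(logits, dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # [batch, seq - 1]

    # Minimize mean entropy over the generated tokens only, sharpening the
    # model's confidence in its own continuations.
    loss = entropy[:, prompt_len - 1:].mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: mean entropy = {loss.item():.4f}")
```

The key design point the abstract emphasizes is how little this loop needs: one unlabeled prompt, the model's own samples, and a handful of gradient steps, in contrast to rule-based reinforcement learning pipelines that require thousands of examples and hand-crafted reward functions.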