Evolution Strategies at Scale: LLM Fine-Tuning Beyond Reinforcement Learning
September 29, 2025
Authors: Xin Qiu, Yulu Gan, Conor F. Hayes, Qiyao Liang, Elliot Meyerson, Babak Hodjat, Risto Miikkulainen
cs.AI
Abstract
Fine-tuning pre-trained large language models (LLMs) for downstream tasks is a critical step in the AI deployment pipeline. Reinforcement learning (RL) is arguably the most prominent fine-tuning method, contributing to the birth of many state-of-the-art LLMs. In contrast, evolution strategies (ES), which once showed performance comparable to RL on models with a few million parameters, have been neglected due to pessimistic perceptions of their scalability to larger models. In this work, we report the first successful attempt to scale up ES to fine-tune the full parameters of LLMs, showing the surprising fact that ES can search efficiently over billions of parameters and outperform existing RL fine-tuning methods in multiple respects, including sample efficiency, tolerance to long-horizon rewards, robustness across different base LLMs, a lower tendency toward reward hacking, and more stable performance across runs. It therefore serves as a basis for unlocking a new direction in LLM fine-tuning beyond what current RL techniques provide. The source code is provided at: https://github.com/VsonicV/es-fine-tuning-paper.
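
To make the kind of search the abstract refers to concrete, the sketch below shows a generic Gaussian evolution-strategy loop: sample random perturbations of the full parameter vector, evaluate the task reward of each perturbed copy, and update the parameters with a reward-weighted average of the noise. This is a simplified, generic illustration, not the authors' implementation; the function names (`es_fine_tune`, `evaluate`) and hyperparameters are hypothetical, and the actual code is available at the repository linked above.

```python
# Minimal sketch of a Gaussian evolution strategy (ES) loop for fine-tuning
# a flattened parameter vector. Generic illustration only, not the paper's
# implementation. Assumption: `evaluate(params)` returns a scalar task
# reward for a full parameter vector. At LLM scale, real implementations
# avoid storing the noise explicitly (e.g., by regenerating it from seeds).
import numpy as np

def es_fine_tune(params, evaluate, iterations=100, pop_size=32,
                 sigma=0.02, lr=0.01):
    """Antithetic Gaussian ES: perturb, evaluate, reward-weighted update."""
    dim = params.size
    for _ in range(iterations):
        # Sample antithetic Gaussian perturbations (+eps and -eps pairs).
        eps = np.random.randn(pop_size // 2, dim)
        eps = np.concatenate([eps, -eps], axis=0)

        # Evaluate the task reward of each perturbed parameter vector.
        rewards = np.array([evaluate(params + sigma * e) for e in eps])

        # Rank-normalize rewards for robustness to reward scale.
        ranks = rewards.argsort().argsort().astype(np.float64)
        advantages = ranks / (len(ranks) - 1) - 0.5

        # Estimated ascent direction: reward-weighted average of the noise.
        grad = (advantages[:, None] * eps).mean(axis=0) / sigma
        params = params + lr * grad
    return params
```

Because the update needs only scalar rewards per sampled perturbation, this kind of loop requires no backpropagation through the model, which is one reason ES can be attractive for long-horizon or non-differentiable reward signals.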