AsyncFlow: An Asynchronous Streaming RL Framework for Efficient LLM Post-Training
July 2, 2025
作者: Zhenyu Han, Ansheng You, Haibo Wang, Kui Luo, Guang Yang, Wenqi Shi, Menglong Chen, Sicheng Zhang, Zeshun Lan, Chunshi Deng, Huazhong Ji, Wenjie Liu, Yu Huang, Yixiang Zhang, Chenyi Pan, Jing Wang, Xin Huang, Chunsheng Li, Jianping Wu
cs.AI
Abstract
Reinforcement learning (RL) has become a pivotal technology in the
post-training phase of large language models (LLMs). Traditional task-colocated
RL frameworks suffer from significant scalability bottlenecks, while
task-separated RL frameworks face challenges in complex dataflows and the
corresponding resource idling and workload imbalance. Moreover, most existing
frameworks are tightly coupled with LLM training or inference engines, making
it difficult to support custom-designed engines. To address these challenges,
we propose AsyncFlow, an asynchronous streaming RL framework for efficient
post-training. Specifically, we introduce a distributed data storage and
transfer module that provides a unified data management and fine-grained
scheduling capability in a fully streamed manner. This architecture inherently
facilitates automated pipeline overlapping among RL tasks and dynamic load
balancing. Moreover, we propose a producer-consumer-based asynchronous workflow
engineered to minimize computational idleness by strategically deferring the
parameter update process within staleness thresholds. Finally, the core
capability of AsyncFlow is architecturally decoupled from the underlying training
and inference engines and encapsulated by service-oriented user interfaces,
offering a modular and customizable user experience. Extensive experiments
demonstrate an average 1.59× throughput improvement over state-of-the-art
baselines. The architecture presented in this work provides actionable insights
for next-generation RL training system designs.
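
The distributed data storage and transfer module is described above only at a high level. To picture the streaming behavior it enables, here is a minimal single-process sketch, assuming a simple bounded queue; `StreamingSampleQueue` and its methods are illustrative stand-ins, not AsyncFlow's actual module.

```python
# Minimal sketch (not AsyncFlow's real API): a streaming sample queue that
# lets rollout (inference) workers hand finished trajectories to trainers
# one micro-batch at a time, so both stages stay busy instead of waiting
# for a full synchronous batch to complete.
import queue


class StreamingSampleQueue:
    """Single-node stand-in for a distributed data storage/transfer module."""

    def __init__(self, capacity: int = 1024):
        self._q = queue.Queue(maxsize=capacity)

    def put(self, trajectory) -> None:
        # Producers (rollout workers) push each trajectory as soon as its
        # generation finishes; back-pressure applies when the queue is full.
        self._q.put(trajectory)

    def get_batch(self, batch_size: int):
        # Consumers (trainer workers) pull fine-grained micro-batches, so
        # training can start before the whole rollout round has finished.
        return [self._q.get() for _ in range(batch_size)]
```

Because producers and consumers only meet at this buffer, the pipeline overlap the abstract describes falls out of the data path itself rather than from an explicit scheduler.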
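The producer-consumer asynchronous workflow defers weight synchronization within a staleness bound. Below is a minimal sketch of that control flow under stated assumptions: the names (`Trajectory`, `MAX_STALENESS`, `run_async_steps`) and the bound of 2 policy versions are hypothetical, since the abstract does not specify the actual scheduling policy.

```python
# Sketch of a staleness-bounded asynchronous RL training loop: the trainer
# keeps stepping while rollout workers generate with slightly stale weights,
# and a weight sync is deferred until the staleness bound would be violated.
import collections
from dataclasses import dataclass


@dataclass
class Trajectory:
    tokens: list
    version: int  # policy version that generated this trajectory


MAX_STALENESS = 2  # illustrative bound: rollouts may lag by <= 2 versions


def run_async_steps(sample_buffer, num_steps: int, micro_batch: int = 4):
    """Consume trajectories while deferring weight sync within the bound."""
    policy_version = 0
    rollout_version = 0  # version the (simulated) rollout workers hold
    for _ in range(num_steps):
        batch = [sample_buffer.popleft() for _ in range(micro_batch)]
        # Discard samples whose generating policy is too stale to use.
        batch = [t for t in batch if policy_version - t.version <= MAX_STALENESS]
        # ... an optimizer step on `batch` would go here ...
        policy_version += 1
        # Deferred update: only push new weights to rollout workers when the
        # staleness bound would otherwise be violated, keeping both the
        # trainer and the generators busy in the meantime.
        if policy_version - rollout_version >= MAX_STALENESS:
            rollout_version = policy_version  # stands in for a weight broadcast


# Usage: feed the loop a buffer of trajectories tagged with their version.
buf = collections.deque(Trajectory(tokens=[1, 2, 3], version=0) for _ in range(64))
run_async_steps(buf, num_steps=8)
```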
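Finally, the claimed decoupling from training and inference engines suggests a narrow, service-oriented interface that custom-designed engines implement. The sketch below shows what such an interface could look like; the class and method names are assumptions for illustration, not AsyncFlow's published API.

```python
# Hypothetical engine-facing service interfaces: the framework would depend
# only on these contracts, so a custom trainer or inference engine can be
# swapped in without touching the RL workflow itself.
from abc import ABC, abstractmethod


class RolloutService(ABC):
    """Anything that can generate trajectories from prompts."""

    @abstractmethod
    def generate(self, prompts: list[str]) -> list[dict]: ...

    @abstractmethod
    def load_weights(self, state_dict: dict, version: int) -> None: ...


class TrainService(ABC):
    """Anything that can take an optimizer step and export its weights."""

    @abstractmethod
    def train_step(self, batch: list[dict]) -> float: ...

    @abstractmethod
    def snapshot(self) -> dict: ...
```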