

Improving Transformer World Models for Data-Efficient RL

February 3, 2025
Authors: Antoine Dedieu, Joseph Ortiz, Xinghua Lou, Carter Wendelken, Wolfgang Lehrach, J Swaroop Guntupalli, Miguel Lazaro-Gredilla, Kevin Patrick Murphy
cs.AI

Abstract

We present an approach to model-based RL that achieves a new state of the art performance on the challenging Craftax-classic benchmark, an open-world 2D survival game that requires agents to exhibit a wide range of general abilities -- such as strong generalization, deep exploration, and long-term reasoning. With a series of careful design choices aimed at improving sample efficiency, our MBRL algorithm achieves a reward of 67.4% after only 1M environment steps, significantly outperforming DreamerV3, which achieves 53.2%, and, for the first time, exceeds human performance of 65.0%. Our method starts by constructing a SOTA model-free baseline, using a novel policy architecture that combines CNNs and RNNs. We then add three improvements to the standard MBRL setup: (a) "Dyna with warmup", which trains the policy on real and imaginary data, (b) "nearest neighbor tokenizer" on image patches, which improves the scheme to create the transformer world model (TWM) inputs, and (c) "block teacher forcing", which allows the TWM to reason jointly about the future tokens of the next timestep.
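A rough sketch of the "nearest neighbor tokenizer" idea on image patches: instead of a learned VQ-VAE-style encoder, each patch is compared against a codebook of previously seen patches and is either assigned the index of its nearest code or added as a new code when nothing is close enough. The class name, L2 distance, threshold value, and patch/observation sizes below are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a nearest-neighbor patch tokenizer (illustrative assumptions:
# L2 distance, threshold tau, non-overlapping patches).
import numpy as np

class NearestNeighborTokenizer:
    def __init__(self, tau: float):
        self.tau = tau      # distance threshold for adding a new code (assumed)
        self.codebook = []  # list of flattened patch vectors

    def encode_patch(self, patch: np.ndarray) -> int:
        """Return the index of the nearest code; add a new code if none is close."""
        v = patch.ravel().astype(np.float32)
        if self.codebook:
            dists = np.linalg.norm(np.stack(self.codebook) - v, axis=1)
            idx = int(np.argmin(dists))
            if dists[idx] <= self.tau:
                return idx
        self.codebook.append(v)
        return len(self.codebook) - 1

    def encode_image(self, image: np.ndarray, patch: int) -> np.ndarray:
        """Split an HxWxC image into non-overlapping patches and tokenize each."""
        H, W, _ = image.shape
        tokens = [
            self.encode_patch(image[i:i + patch, j:j + patch])
            for i in range(0, H, patch)
            for j in range(0, W, patch)
        ]
        return np.array(tokens)

# Example: a 63x63x3 observation split into 7x7 patches yields 9x9 = 81 tokens.
tok = NearestNeighborTokenizer(tau=0.75)
obs = np.random.rand(63, 63, 3)
print(tok.encode_image(obs, patch=7).shape)  # (81,)
```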
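"Block teacher forcing" lets the TWM reason jointly about all tokens of the next timestep rather than predicting them one at a time within a timestep; in practice this amounts to a block-causal attention mask over a sequence of T timesteps with L tokens each. The helper below is a minimal sketch of such a mask under that reading; the function name and shapes are assumptions, not the paper's exact code.

```python
# Minimal sketch of a block-causal attention mask for block teacher forcing.
# Every token belonging to timestep t may attend to all tokens of timesteps <= t,
# so the L tokens of the next timestep can be predicted in parallel.
import numpy as np

def block_causal_mask(num_timesteps: int, tokens_per_step: int) -> np.ndarray:
    """Return a (T*L, T*L) boolean mask where True means 'may attend'."""
    step_of = np.repeat(np.arange(num_timesteps), tokens_per_step)
    # Query at timestep i attends to keys at timesteps j <= i (whole blocks).
    return step_of[:, None] >= step_of[None, :]

mask = block_causal_mask(num_timesteps=3, tokens_per_step=2)
print(mask.astype(int))
# [[1 1 0 0 0 0]
#  [1 1 0 0 0 0]
#  [1 1 1 1 0 0]
#  [1 1 1 1 0 0]
#  [1 1 1 1 1 1]
#  [1 1 1 1 1 1]]
```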
