A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks
October 29, 2024
作者: Thomas Schmied, Thomas Adler, Vihang Patil, Maximilian Beck, Korbinian Pöppel, Johannes Brandstetter, Günter Klambauer, Razvan Pascanu, Sepp Hochreiter
cs.AI
Abstract
In recent years, there has been a trend in the field of Reinforcement
Learning (RL) towards large action models trained offline on large-scale
datasets via sequence modeling. Existing models are primarily based on the
Transformer architecture, resulting in powerful agents. However, due to slow
inference times, Transformer-based approaches are impractical for real-time
applications, such as robotics. Recently, modern recurrent architectures, such
as xLSTM and Mamba, have been proposed that exhibit parallelization benefits
during training similar to the Transformer architecture while offering fast
inference. In this work, we study the aptitude of these modern recurrent
architectures for large action models. Consequently, we propose a Large
Recurrent Action Model (LRAM) with an xLSTM at its core that comes with
linear-time inference complexity and natural sequence length extrapolation
abilities. Experiments on 432 tasks from 6 domains show that LRAM compares
favorably to Transformers in terms of performance and speed.
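
The efficiency claim rests on a simple contrast: a recurrent model updates a fixed-size state per action, so each inference step costs the same regardless of history length, whereas a causal Transformer attends over its full cached context, so per-step cost grows with the sequence. The sketch below is illustrative only; the dimensions, the simplified recurrent update, and the toy attention step are assumptions for exposition, not the paper's LRAM or xLSTM implementation.

```python
# Illustrative only: toy per-step inference cost of a recurrent model
# (fixed-size state) versus a causal Transformer layer (attention over
# the full cached history). Not the paper's code; shapes are assumed.
import numpy as np

d = 64                       # hidden / embedding size (assumed)
rng = np.random.default_rng(0)

# --- Recurrent step: work is independent of how many steps came before ---
W, U = rng.standard_normal((d, d)), rng.standard_normal((d, d))
def recurrent_step(h, x):
    # O(d^2) per step, O(1) in sequence length
    return np.tanh(W @ x + U @ h)

# --- Transformer step: attend over all cached keys/values so far ---
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
def attention_step(kv_cache, x):
    # O(t * d) per step: cost grows with the number of cached tokens t
    q, k, v = Wq @ x, Wk @ x, Wv @ x
    kv_cache.append((k, v))
    K = np.stack([k_ for k_, _ in kv_cache])   # (t, d)
    V = np.stack([v_ for _, v_ in kv_cache])   # (t, d)
    scores = K @ q / np.sqrt(d)                # (t,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                         # (d,)

h, cache = np.zeros(d), []
for t in range(1000):
    x = rng.standard_normal(d)
    h = recurrent_step(h, x)       # constant work per action
    _ = attention_step(cache, x)   # work grows linearly with t
```

Run over a long rollout, the recurrent loop's total cost scales linearly with the number of steps, while the attention loop's total cost scales quadratically, which is the gap that makes recurrent cores attractive for real-time robotic control.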