A Large Recurrent Action Model: xLSTM enables Fast Inference for Robotics Tasks
October 29, 2024
Authors: Thomas Schmied, Thomas Adler, Vihang Patil, Maximilian Beck, Korbinian Pöppel, Johannes Brandstetter, Günter Klambauer, Razvan Pascanu, Sepp Hochreiter
cs.AI
Abstract
In recent years, there has been a trend in the field of Reinforcement
Learning (RL) towards large action models trained offline on large-scale
datasets via sequence modeling. Existing models are primarily based on the
Transformer architecture, which results in powerful agents. However, due to their slow
inference times, Transformer-based approaches are impractical for real-time
applications, such as robotics. Recently, modern recurrent architectures, such
as xLSTM and Mamba, have been proposed that exhibit parallelization benefits
during training similar to the Transformer architecture while offering fast
inference. In this work, we study the aptitude of these modern recurrent
architectures for large action models. Consequently, we propose a Large
Recurrent Action Model (LRAM) with an xLSTM at its core that comes with
linear-time inference complexity and natural sequence length extrapolation
abilities. Experiments on 432 tasks from 6 domains show that LRAM compares
favorably to Transformers in terms of performance and speed.
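To make the inference-cost claim concrete, the following is a minimal illustrative sketch (not the paper's code, and not xLSTM itself): a generic recurrent cell updates a fixed-size state with constant work per generated token, whereas autoregressive self-attention with a KV cache must attend over all previously cached tokens, so its per-token cost grows with context length. All dimensions and function names here are hypothetical.

import numpy as np

d = 64  # hidden/state size (illustrative choice, not from the paper)

def recurrent_step(state, x, W_h, W_x):
    # Constant work per token: one fixed-size state update, independent of sequence length.
    return np.tanh(state @ W_h + x @ W_x)

def attention_step(kv_cache, q, k, v):
    # Work grows with the number of cached tokens: attend over the whole history so far.
    kv_cache.append((k, v))
    keys = np.stack([k_ for k_, _ in kv_cache])    # (t, d)
    values = np.stack([v_ for _, v_ in kv_cache])  # (t, d)
    scores = keys @ q / np.sqrt(d)                 # (t,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ values                        # (d,)

rng = np.random.default_rng(0)
W_h = rng.normal(size=(d, d)) / np.sqrt(d)
W_x = rng.normal(size=(d, d)) / np.sqrt(d)
state, cache = np.zeros(d), []
for _ in range(8):  # decode 8 tokens autoregressively
    x = rng.normal(size=d)
    state = recurrent_step(state, x, W_h, W_x)  # O(1) compute and memory per step
    _ = attention_step(cache, x, x, x)          # cache, and thus per-step cost, keeps growing

The recurrent path keeps per-step latency and memory flat as the context grows, which is the property the abstract refers to as linear-time inference with natural sequence length extrapolation; the attention path's per-step cost scales with the number of tokens generated so far.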