SERL: A Software Suite for Sample-Efficient Robotic Reinforcement Learning
January 29, 2024
Authors: Jianlan Luo, Zheyuan Hu, Charles Xu, You Liang Tan, Jacob Berg, Archit Sharma, Stefan Schaal, Chelsea Finn, Abhishek Gupta, Sergey Levine
cs.AI
Abstract
In recent years, significant progress has been made in the field of robotic
reinforcement learning (RL), enabling methods that handle complex image
observations, train in the real world, and incorporate auxiliary data, such as
demonstrations and prior experience. However, despite these advances, robotic
RL remains hard to use. It is acknowledged among practitioners that the
particular implementation details of these algorithms are often just as
important (if not more so) for performance as the choice of algorithm. We posit
that a significant challenge to widespread adoption of robotic RL, as well as
further development of robotic RL methods, is the comparative inaccessibility
of such methods. To address this challenge, we developed a carefully
implemented library containing a sample-efficient off-policy deep RL method,
together with methods for computing rewards and resetting the environment, a
high-quality controller for a widely-adopted robot, and a number of challenging
example tasks. We provide this library as a resource for the community,
describe its design choices, and present experimental results. Perhaps
surprisingly, we find that our implementation can achieve very efficient
learning, acquiring policies for PCB board assembly, cable routing, and object
relocation in 25 to 50 minutes of training per policy on average,
improving over state-of-the-art results reported for similar tasks in the
literature. These policies achieve perfect or near-perfect success rates, are
extremely robust even under perturbations, and exhibit emergent recovery and
correction behaviors. We hope that these promising results and our high-quality
open-source implementation will provide a tool for the robotics community to
facilitate further developments in robotic RL. Our code, documentation, and
videos can be found at https://serl-robot.github.io/.
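The abstract mentions incorporating auxiliary data such as demonstrations into a sample-efficient off-policy method. One common way to do this (used by several off-policy RL systems, though not necessarily the actual SERL implementation) is to draw each training batch half from a fixed set of offline demonstrations and half from the online replay buffer. The sketch below illustrates that symmetric-sampling idea under those assumptions; the class name `MixedReplayBuffer` and its interface are hypothetical, not the SERL API.

```python
import random
from collections import deque


class MixedReplayBuffer:
    """Hypothetical sketch of demonstration-seeded replay sampling.

    Each batch is drawn half from a fixed pool of demonstration
    transitions and half from online experience, so demonstrations
    keep influencing updates throughout training.
    """

    def __init__(self, capacity, demos, seed=0):
        # Online experience is bounded; old transitions are evicted.
        self.online = deque(maxlen=capacity)
        # Demonstrations are kept for the whole run.
        self.demos = list(demos)
        self.rng = random.Random(seed)

    def add(self, transition):
        """Store one transition collected by the current policy."""
        self.online.append(transition)

    def sample(self, batch_size):
        """Return a batch mixing demo and online transitions 50/50."""
        half = batch_size // 2
        demo_part = self.rng.choices(self.demos, k=half)
        online_part = self.rng.choices(list(self.online),
                                       k=batch_size - half)
        return demo_part + online_part
```

In practice the sampled batch would feed a standard off-policy update (e.g. an actor-critic gradient step); the 50/50 split is a design choice that keeps the effective demo weight constant even as online data accumulates.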