TacSL: A Library for Visuotactile Sensor Simulation and Learning
August 12, 2024
Authors: Iretiayo Akinola, Jie Xu, Jan Carius, Dieter Fox, Yashraj Narang
cs.AI
Abstract
For both humans and robots, the sense of touch, known as tactile sensing, is
critical for performing contact-rich manipulation tasks. Three key challenges
in robotic tactile sensing are 1) interpreting sensor signals, 2) generating
sensor signals in novel scenarios, and 3) learning sensor-based policies. For
visuotactile sensors, interpretation has been facilitated by their close
relationship with vision sensors (e.g., RGB cameras). However, generation is
still difficult, as visuotactile sensors typically involve contact,
deformation, illumination, and imaging, all of which are expensive to simulate;
in turn, policy learning has been challenging, as simulation cannot be
leveraged for large-scale data collection. We present TacSL
(taxel), a library for GPU-based visuotactile sensor simulation and
learning. TacSL can be used to simulate visuotactile images and
extract contact-force distributions over 200 times faster than the prior
state-of-the-art, all within the widely-used Isaac Gym simulator. Furthermore,
TacSL provides a learning toolkit containing multiple sensor models,
contact-intensive training environments, and online/offline algorithms that can
facilitate policy learning for sim-to-real applications. On the algorithmic
side, we introduce a novel online reinforcement-learning algorithm called
asymmetric actor-critic distillation (AACD), designed to effectively and
efficiently learn tactile-based policies in simulation that can transfer to the
real world. Finally, we demonstrate the utility of our library and algorithms
by evaluating the benefits of distillation and multimodal sensing for
contact-rich manipulation tasks, and most critically, performing sim-to-real
transfer. Supplementary videos and results are at
https://iakinola23.github.io/tacsl/.
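The asymmetric actor-critic idea described above can be illustrated with a minimal sketch. This is not TacSL's actual API; the linear "networks", loss weighting, and all names here are hypothetical stand-ins. The key structural point it shows is that the critic consumes privileged simulator state unavailable on real hardware, while the student policy sees only tactile observations and is additionally pulled toward a teacher's actions by a distillation term.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; illustrative only, not the paper's architecture.
TACTILE_DIM, PRIV_DIM, ACT_DIM = 16, 8, 4

# Linear maps stand in for neural networks.
W_student = rng.normal(size=(ACT_DIM, TACTILE_DIM)) * 0.1  # sees tactile only
W_teacher = rng.normal(size=(ACT_DIM, PRIV_DIM)) * 0.1     # sees privileged state
w_critic = rng.normal(size=TACTILE_DIM + PRIV_DIM) * 0.1   # asymmetric critic

def aacd_loss(tactile, priv, ret_target, beta=1.0):
    """Hypothetical combined objective: value regression for a critic that
    uses privileged simulator state (the asymmetric part), plus an MSE
    distillation term pulling the tactile student toward the teacher."""
    # Critic sees both tactile and privileged inputs (simulation only).
    value = w_critic @ np.concatenate([tactile, priv])
    value_err = (value - ret_target) ** 2
    # Student acts from tactile observations; teacher from privileged state.
    a_student = W_student @ tactile
    a_teacher = W_teacher @ priv
    distill = np.mean((a_student - a_teacher) ** 2)
    return value_err + beta * distill
```

At deployment, only the student's tactile pathway is needed, so the privileged inputs never have to leave simulation; this is what makes the approach amenable to sim-to-real transfer.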