NeuGrasp: Generalizable Neural Surface Reconstruction with Background Priors for Material-Agnostic Object Grasp Detection
March 5, 2025
Authors: Qingyu Fan, Yinghao Cai, Chao Li, Wenzhe He, Xudong Zheng, Tao Lu, Bin Liang, Shuo Wang
cs.AI
Abstract
Robotic grasping in scenes with transparent and specular objects presents great challenges for methods relying on accurate depth information. In this paper, we introduce NeuGrasp, a neural surface reconstruction method that leverages background priors for material-agnostic grasp detection. NeuGrasp integrates transformers and global prior volumes to aggregate multi-view features with spatial encoding, enabling robust surface reconstruction in narrow and sparse viewing conditions. By focusing on foreground objects through residual feature enhancement and refining spatial perception with an occupancy-prior volume, NeuGrasp excels in handling objects with transparent and specular surfaces. Extensive experiments in both simulated and real-world scenarios show that NeuGrasp outperforms state-of-the-art methods in grasping while maintaining comparable reconstruction quality. More details are available at https://neugrasp.github.io/.
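
To make the "residual feature enhancement" idea in the abstract concrete, here is a minimal, hypothetical sketch: background-only image features are subtracted from scene features, and the residual is used to re-weight the features toward foreground objects. The module name, tensor shapes, and the gating design are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of residual feature enhancement with a background prior.
# Not the NeuGrasp implementation; shapes and the gate head are assumptions.
import torch
import torch.nn as nn


class ResidualFeatureEnhancer(nn.Module):
    """Re-weights per-view scene features using a scene-minus-background residual."""

    def __init__(self, feat_dim: int = 32):
        super().__init__()
        # Small conv head that turns the residual into a per-pixel foreground gate.
        self.gate = nn.Sequential(
            nn.Conv2d(feat_dim, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_dim, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, scene_feat: torch.Tensor, bg_feat: torch.Tensor) -> torch.Tensor:
        # scene_feat, bg_feat: (views, feat_dim, H, W) maps from a shared 2D encoder.
        residual = scene_feat - bg_feat      # large where foreground objects differ from background
        attn = self.gate(residual)           # (views, 1, H, W) soft foreground mask
        return scene_feat * attn + residual  # features emphasized on foreground regions


if __name__ == "__main__":
    enhancer = ResidualFeatureEnhancer(feat_dim=32)
    scene = torch.randn(4, 32, 60, 80)       # e.g., 4 sparse views of the cluttered scene
    background = torch.randn(4, 32, 60, 80)  # same views of the empty (background) scene
    print(enhancer(scene, background).shape)  # torch.Size([4, 32, 60, 80])
```

In such a design, the enhanced per-view features would then feed the multi-view aggregation (transformers and prior volumes) described in the abstract, so that reconstruction and grasp detection concentrate on the transparent or specular foreground objects rather than the background.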