SafeDiffuser: Safe Planning with Diffusion Probabilistic Models
May 31, 2023
Authors: Wei Xiao, Tsun-Hsuan Wang, Chuang Gan, Daniela Rus
cs.AI
Abstract
Diffusion model-based approaches have shown promise in data-driven planning,
but they lack safety guarantees, which makes them hard to apply to
safety-critical applications. To address these challenges, we propose a new
method, called SafeDiffuser, that ensures diffusion probabilistic models satisfy
specifications by using a class of control barrier functions. The key idea of
our approach is to embed the proposed finite-time diffusion invariance into the
denoising diffusion procedure, which enables trustworthy diffusion data
generation. Moreover, we demonstrate that enforcing finite-time diffusion
invariance through generative models not only maintains generalization
performance but also adds robustness to safe data generation. We test our method
on a series of safe planning tasks, including maze path generation, legged robot
locomotion, and 3D space manipulation, with results showing the advantages in
robustness and guarantees over vanilla diffusion models.
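To make the core idea concrete, here is a minimal, hypothetical sketch of how a discrete-time control-barrier-function condition can be enforced inside a denoising loop. This is an illustration of the general CBF-projection technique, not the authors' SafeDiffuser implementation: the barrier `h(x) = ||x||^2 - 1` (stay outside a unit-disk obstacle), the decay rate `gamma`, and the toy denoising update are all assumptions made for the example.

```python
import numpy as np

def barrier(x):
    # Example safety specification (assumed for illustration):
    # h(x) = ||x||^2 - 1 >= 0 means x is outside a unit-disk obstacle.
    return np.dot(x, x) - 1.0

def barrier_grad(x):
    return 2.0 * x

def safe_step(x, proposed_step, gamma=0.5):
    """Minimally correct a proposed update dx so the discrete-time CBF
    condition  grad_h(x) . dx >= -gamma * h(x)  holds, which keeps
    h nonnegative along the trajectory (here exactly, since h is
    quadratic and the second-order term is nonnegative)."""
    g = barrier_grad(x)
    viol = np.dot(g, proposed_step) + gamma * barrier(x)
    if viol >= 0.0:
        return proposed_step  # proposed update already satisfies the condition
    # Closed-form solution of the single-constraint minimal-norm QP:
    # shift the step along grad h just enough to restore the condition.
    return proposed_step - (viol / np.dot(g, g)) * g

# Toy "denoising" loop: a noisy point is pulled toward a target whose
# straight-line path would cut through the unsafe disk.
rng = np.random.default_rng(0)
x = np.array([3.0, 0.0])
target = np.array([-3.0, 0.0])
for _ in range(200):
    step = 0.05 * (target - x) + 0.01 * rng.normal(size=2)
    x = x + safe_step(x, step)

print(barrier(x) >= -1e-9)  # the final point never entered the unsafe set
```

An unconstrained version of the same loop would drive `x` straight through the obstacle; the projection instead deflects each update so the invariance `h(x) >= 0` is preserved at every step, which is the sense in which the generated sample is "trustworthy by construction."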