SafeDiffuser: Safe Planning with Diffusion Probabilistic Models
May 31, 2023
Authors: Wei Xiao, Tsun-Hsuan Wang, Chuang Gan, Daniela Rus
cs.AI
Abstract
Diffusion model-based approaches have shown promise in data-driven planning,
but they lack safety guarantees, making them difficult to apply to
safety-critical applications. To address this challenge, we propose a new
method, called SafeDiffuser, that ensures diffusion probabilistic models satisfy
specifications by using a class of control barrier functions. The key idea of
our approach is to embed the proposed finite-time diffusion invariance into the
denoising diffusion procedure, which enables trustworthy diffusion data
generation. Moreover, we demonstrate that our finite-time diffusion invariance
method not only maintains the generalization performance of the generative model
but also provides robustness in safe data generation. We test our method on a
series of safe planning tasks, including maze path generation, legged-robot
locomotion, and 3D space manipulation, with results showing the advantages of
robustness and guarantees over vanilla diffusion models.
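The abstract describes embedding a finite-time invariance condition, enforced through control barrier functions, into each denoising step so that generated trajectories end up inside a specified safe set. Below is a minimal sketch of that general idea, not the authors' implementation: it assumes a discrete-time CBF-style condition, a simple circular obstacle as the safe-set barrier, and illustrative names (`h`, `grad_h`, `cbf_correct`, `safe_denoise`, `denoise_step`, `gamma`) that do not appear in the paper.

```python
import numpy as np

# Sketch: after every denoising update, apply a minimum-norm correction to each
# waypoint so that a barrier function h(x) >= 0 (safe set) satisfies a
# discrete-time CBF condition, pushing unsafe points back toward the safe set.

def h(x, center=np.array([0.5, 0.5]), radius=0.2):
    """Barrier: positive outside a circular obstacle (safe), negative inside."""
    return np.sum((x - center) ** 2) - radius ** 2

def grad_h(x, center=np.array([0.5, 0.5])):
    """Gradient of the barrier with respect to the waypoint."""
    return 2.0 * (x - center)

def cbf_correct(x, gamma=1.0):
    """Smallest-norm correction d so that, to first order,
    h(x + d) >= (1 - gamma) * h(x) (a discrete-time CBF condition).
    Closed-form solution of the linearized quadratic program."""
    g = grad_h(x)
    # Linearized condition: h(x) + g.dot(d) >= (1 - gamma) * h(x)
    #                 <=>   g.dot(d) >= -gamma * h(x)
    violation = -gamma * h(x)
    if violation <= 0.0:
        return x  # condition already holds, no correction needed
    # Minimum-norm d satisfying g.dot(d) = violation
    return x + violation * g / (np.dot(g, g) + 1e-12)

def safe_denoise(trajectory, denoise_step, n_steps=50):
    """Generic reverse-diffusion loop with a per-waypoint safety correction
    applied after every denoising update, so the final plan lies in the safe set."""
    x = trajectory
    for t in reversed(range(n_steps)):
        x = denoise_step(x, t)                      # ordinary diffusion update
        x = np.stack([cbf_correct(p) for p in x])   # CBF-style safety correction
    return x

# Example usage with a dummy denoiser that only adds small Gaussian noise.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dummy_step = lambda traj, t: traj + 0.01 * rng.standard_normal(traj.shape)
    plan = safe_denoise(rng.standard_normal((16, 2)), dummy_step)
    print("min h over waypoints:", min(h(p) for p in plan))  # >= 0 for gamma = 1
```

In this toy setting the barrier is convex, so the first-order correction already guarantees h >= 0 at every waypoint when gamma = 1; the paper's method instead solves the corresponding constrained problem within the denoising procedure itself to obtain its finite-time invariance guarantees.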