
Point-Cloud Completion with Pretrained Text-to-image Diffusion Models

June 18, 2023
作者: Yoni Kasten, Ohad Rahamim, Gal Chechik
cs.AI

Abstract

Point-cloud data collected in real-world applications is often incomplete. Data is typically missing because objects are observed from partial viewpoints, which capture only a specific perspective or angle. Additionally, data can be incomplete due to occlusion and low-resolution sampling. Existing completion approaches rely on datasets of predefined objects to guide the completion of noisy and incomplete point clouds. However, these approaches perform poorly when tested on Out-Of-Distribution (OOD) objects that are poorly represented in the training dataset. Here we leverage recent advances in text-guided image generation, which have led to major breakthroughs in text-guided shape generation. We describe an approach called SDS-Complete that uses a pre-trained text-to-image diffusion model and leverages the text semantics of a given incomplete point cloud of an object to obtain a complete surface representation. SDS-Complete can complete a variety of objects using test-time optimization without expensive collection of 3D information. We evaluate SDS-Complete on incomplete scanned objects captured by real-world depth sensors and LiDAR scanners. We find that it effectively reconstructs objects that are absent from common datasets, reducing Chamfer loss by 50% on average compared with current methods. Project page: https://sds-complete.github.io/
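The Chamfer loss used in the evaluation measures how well a reconstructed point cloud matches a reference cloud: for each point in one set, take the distance to its nearest neighbor in the other set, and sum the two directional averages. A minimal NumPy sketch of the symmetric squared-distance variant is shown below; the paper's exact normalization and distance convention may differ.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3),
    using squared Euclidean nearest-neighbor distances."""
    # Pairwise squared distances via broadcasting: shape (N, M).
    d = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbor distance in each direction, then sum.
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Identical clouds have zero Chamfer distance.
pts = np.random.rand(100, 3)
print(chamfer_distance(pts, pts))  # → 0.0
```

The brute-force pairwise matrix is O(N·M) in memory; for large real-world scans, a k-d tree nearest-neighbor query is the usual substitute.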