OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding

May 18, 2023
Authors: Minghua Liu, Ruoxi Shi, Kaiming Kuang, Yinhao Zhu, Xuanlin Li, Shizhong Han, Hong Cai, Fatih Porikli, Hao Su
cs.AI

Abstract

We introduce OpenShape, a method for learning multi-modal joint representations of text, image, and point clouds. We adopt the commonly used multi-modal contrastive learning framework for representation alignment, but with a specific focus on scaling up 3D representations to enable open-world 3D shape understanding. To achieve this, we scale up training data by ensembling multiple 3D datasets and propose several strategies to automatically filter and enrich noisy text descriptions. We also explore and compare strategies for scaling 3D backbone networks and introduce a novel hard negative mining module for more efficient training. We evaluate OpenShape on zero-shot 3D classification benchmarks and demonstrate its superior capabilities for open-world recognition. Specifically, OpenShape achieves a zero-shot accuracy of 46.8% on the 1,156-category Objaverse-LVIS benchmark, compared to less than 10% for existing methods. OpenShape also achieves an accuracy of 85.3% on ModelNet40, outperforming previous zero-shot baseline methods by 20% and performing on par with some fully-supervised methods. Furthermore, we show that our learned embeddings encode a wide range of visual and semantic concepts (e.g., subcategories, color, shape, style) and facilitate fine-grained text-3D and image-3D interactions. Due to their alignment with CLIP embeddings, our learned shape representations can also be integrated with off-the-shelf CLIP-based models for various applications, such as point cloud captioning and point cloud-conditioned image generation.
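To make the alignment objective concrete, below is a minimal PyTorch sketch of the CLIP-style multi-modal contrastive training the abstract describes. This is not the authors' implementation: the encoder interfaces, the frozen-CLIP setup, and the temperature value are assumptions for illustration only.

```python
# Minimal sketch of CLIP-style tri-modal contrastive alignment.
# Not the official OpenShape code: encoder interfaces, the frozen-CLIP
# setup, and the temperature are illustrative assumptions.
import torch
import torch.nn.functional as F

def contrastive_loss(anchor, target, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of embeddings."""
    anchor = F.normalize(anchor, dim=-1)
    target = F.normalize(target, dim=-1)
    logits = anchor @ target.t() / temperature          # (B, B) similarities
    labels = torch.arange(anchor.size(0), device=anchor.device)
    # Diagonal entries are the matched (positive) pairs; the rest of
    # each row/column serves as in-batch negatives.
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

def training_step(point_encoder, points, clip_image_emb, clip_text_emb):
    """Align a trainable 3D backbone to frozen CLIP image/text embeddings."""
    shape_emb = point_encoder(points)                   # (B, D), trainable
    return (contrastive_loss(shape_emb, clip_text_emb)
            + contrastive_loss(shape_emb, clip_image_emb))
```

In this sketch only the 3D backbone receives gradients; the image and text embeddings come from a frozen CLIP model, which is what lets the learned shape features plug into off-the-shelf CLIP-based pipelines such as captioning and image generation.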
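Given that alignment, zero-shot 3D classification (as in the Objaverse-LVIS and ModelNet40 results above) reduces to nearest-neighbor search over CLIP text embeddings of the category names. A hypothetical sketch, where `point_encoder`, `clip_model`, `tokenizer`, and the prompt template are stand-ins rather than OpenShape's actual API:

```python
# Hypothetical zero-shot 3D classification: because shape embeddings live
# in CLIP space, CLIP text embeddings of class names act as classifier
# weights. `point_encoder`, `clip_model`, and `tokenizer` are stand-ins.
import torch
import torch.nn.functional as F

@torch.no_grad()
def zero_shot_classify(point_encoder, clip_model, tokenizer,
                       points, class_names):
    prompts = [f"a 3D model of a {name}" for name in class_names]
    text_emb = clip_model.encode_text(tokenizer(prompts))   # (C, D)
    shape_emb = point_encoder(points.unsqueeze(0))          # (1, D)
    scores = F.cosine_similarity(shape_emb, text_emb)       # broadcast -> (C,)
    return class_names[scores.argmax().item()]
```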