

NAVI: Category-Agnostic Image Collections with High-Quality 3D Shape and Pose Annotations

June 15, 2023
Authors: Varun Jampani, Kevis-Kokitsi Maninis, Andreas Engelhardt, Arjun Karpur, Karen Truong, Kyle Sargent, Stefan Popov, André Araujo, Ricardo Martin-Brualla, Kaushal Patel, Daniel Vlasic, Vittorio Ferrari, Ameesh Makadia, Ce Liu, Yuanzhen Li, Howard Zhou
cs.AI

Abstract

Recent advances in neural reconstruction enable high-quality 3D object reconstruction from casually captured image collections. Current techniques mostly analyze their progress on relatively simple image collections where Structure-from-Motion (SfM) techniques can provide ground-truth (GT) camera poses. We note that SfM techniques tend to fail on in-the-wild image collections such as image search results with varying backgrounds and illuminations. To enable systematic research progress on 3D reconstruction from casual image captures, we propose NAVI: a new dataset of category-agnostic image collections of objects with high-quality 3D scans along with per-image 2D-3D alignments providing near-perfect GT camera parameters. These 2D-3D alignments allow us to extract accurate derivative annotations such as dense pixel correspondences, depth and segmentation maps. We demonstrate the use of NAVI image collections on different problem settings and show that NAVI enables more thorough evaluations that were not possible with existing datasets. We believe NAVI is beneficial for systematic research progress on 3D reconstruction and correspondence estimation. Project page: https://navidataset.github.io
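The abstract notes that per-image 2D-3D alignments with near-perfect ground-truth camera parameters let one derive annotations such as depth maps and dense pixel correspondences. The sketch below illustrates the underlying idea with a standard pinhole projection in plain NumPy; it is not the NAVI toolkit, and all names (project_points, R, t, K, the toy vertices) are illustrative assumptions.

```python
# Hedged sketch: project aligned 3D scan points into an image using ground-truth
# camera parameters, yielding per-point depth and 2D locations (the basis for
# depth maps and cross-image correspondences). Names and values are assumptions.
import numpy as np

def project_points(points_world, R, t, K):
    """Project Nx3 world points into the image; return Nx2 pixels and per-point depth."""
    points_cam = points_world @ R.T + t        # world frame -> camera frame
    depth = points_cam[:, 2]                   # z in camera frame = depth annotation
    pixels_h = points_cam @ K.T                # apply pinhole intrinsics
    pixels = pixels_h[:, :2] / depth[:, None]  # perspective divide
    return pixels, depth

# Toy example: three scan vertices, identity rotation, camera 2 m away along +z.
vertices = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.05]])
R = np.eye(3)
t = np.array([0.0, 0.0, 2.0])
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])

px, z = project_points(vertices, R, t, K)
print(px)  # 2D pixel locations of the same physical points -> dense correspondences
print(z)   # depth values at those locations
```

Projecting the same scan points into two differently posed images gives matching pixel pairs, which is one way such alignments can serve as ground truth for correspondence estimation.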