ZeroShape: Regression-based Zero-shot Shape Reconstruction
December 21, 2023
Authors: Zixuan Huang, Stefan Stojanov, Anh Thai, Varun Jampani, James M. Rehg
cs.AI
Abstract
We study the problem of single-image zero-shot 3D shape reconstruction.
Recent works learn zero-shot shape reconstruction through generative modeling
of 3D assets, but these models are computationally expensive at train and
inference time. In contrast, the traditional approach to this problem is
regression-based, where deterministic models are trained to directly regress
the object shape. Such regression methods possess much higher computational
efficiency than generative methods. This raises a natural question: is
generative modeling necessary for high performance, or conversely, are
regression-based approaches still competitive? To answer this, we design a
strong regression-based model, called ZeroShape, based on the converging
findings in this field and a novel insight. We also curate a large real-world
evaluation benchmark, with objects from three different real-world 3D datasets.
This evaluation benchmark is more diverse and an order of magnitude larger than
what prior works use to quantitatively evaluate their models, with the aim of
reducing evaluation variance in our field. We show that ZeroShape not only
achieves superior performance over state-of-the-art methods, but also
demonstrates significantly higher computational and data efficiency.
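The regression paradigm contrasted above (a deterministic model trained to map an image directly to shape) can be illustrated with a toy sketch. This is not ZeroShape's actual architecture; all names, feature sizes, and the occupancy-style output are illustrative assumptions. A small model takes a fixed "image feature" plus a 3D query point and deterministically regresses an occupancy value, trained by plain gradient descent on a reconstruction loss:

```python
import numpy as np

# Hypothetical minimal sketch of regression-based shape reconstruction:
# one deterministic output per (image, 3D query point) pair, supervised
# directly with a squared-error loss. Shapes and names are illustrative.

rng = np.random.default_rng(0)

def regress_occupancy(weights, image_feat, query_xyz):
    """Deterministic forward pass: a single occupancy value in (0, 1)."""
    x = np.concatenate([image_feat, query_xyz])
    return float(1.0 / (1.0 + np.exp(-weights @ x)))

# Toy data: an 8-dim "image feature" and 3D query points with 0/1 occupancy.
feat = rng.normal(size=8)
points = rng.normal(size=(16, 3))
labels = (points[:, 0] > 0).astype(float)  # arbitrary toy ground truth

w = np.zeros(11)  # weights over [image_feat, query_xyz]
lr = 0.5
for _ in range(200):
    for p, y in zip(points, labels):
        pred = regress_occupancy(w, feat, p)
        # Gradient of squared error through the sigmoid output.
        grad = 2.0 * (pred - y) * pred * (1.0 - pred)
        w -= lr * grad * np.concatenate([feat, p])

preds = [regress_occupancy(w, feat, p) for p in points]
acc = float(np.mean([(pr > 0.5) == (y > 0.5) for pr, y in zip(preds, labels)]))
```

Unlike a generative model, this regressor needs no sampling at inference: one forward pass per query point yields the same answer every time, which is the source of the computational-efficiency advantage the abstract refers to.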