SparseFlex: High-Resolution and Arbitrary-Topology 3D Shape Modeling
March 27, 2025
Authors: Xianglong He, Zi-Xin Zou, Chia-Hao Chen, Yuan-Chen Guo, Ding Liang, Chun Yuan, Wanli Ouyang, Yan-Pei Cao, Yangguang Li
cs.AI
Abstract
Creating high-fidelity 3D meshes with arbitrary topology, including open
surfaces and complex interiors, remains a significant challenge. Existing
implicit field methods often require costly and detail-degrading watertight
conversion, while other approaches struggle with high resolutions. This paper
introduces SparseFlex, a novel sparse-structured isosurface representation that
enables differentiable mesh reconstruction at resolutions up to 1024^3
directly from rendering losses. SparseFlex combines the accuracy of Flexicubes
with a sparse voxel structure, focusing computation on surface-adjacent regions
and efficiently handling open surfaces. Crucially, we introduce a frustum-aware
sectional voxel training strategy that activates only relevant voxels during
rendering, dramatically reducing memory consumption and enabling
high-resolution training. This also allows, for the first time, the
reconstruction of mesh interiors using only rendering supervision. Building
upon this, we demonstrate a complete shape modeling pipeline by training a
variational autoencoder (VAE) and a rectified flow transformer for high-quality
3D shape generation. Our experiments show state-of-the-art reconstruction
accuracy, with a ~82% reduction in Chamfer Distance and a ~88% increase in
F-score compared to previous methods, and demonstrate the generation of
high-resolution, detailed 3D shapes with arbitrary topology. By enabling
high-resolution, differentiable mesh reconstruction and generation with
rendering losses, SparseFlex significantly advances the state-of-the-art in 3D
shape representation and modeling.
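The frustum-aware sectional voxel training strategy described above can be illustrated with a short sketch: for each training view, only the sparse voxels whose centers fall inside the camera frustum are activated before differentiable isosurface extraction and rendering, which is what keeps memory bounded at 1024^3 resolution. The code below is a minimal, assumed illustration of that selection step, not the authors' implementation; the names `voxel_centers`, `view_proj`, and `margin` are illustrative.

```python
import torch

def frustum_mask(voxel_centers: torch.Tensor,
                 view_proj: torch.Tensor,
                 margin: float = 0.02) -> torch.Tensor:
    # voxel_centers: (N, 3) world-space centers of the active sparse voxels
    # view_proj: (4, 4) camera view-projection matrix
    # Returns a boolean mask over the N voxels that lie inside the (slightly
    # enlarged) view frustum, so per-view computation touches only those voxels.
    ones = torch.ones(voxel_centers.shape[0], 1, device=voxel_centers.device)
    clip = torch.cat([voxel_centers, ones], dim=1) @ view_proj.T  # to clip space
    ndc = clip[:, :3] / clip[:, 3:4].clamp(min=1e-8)              # perspective divide
    return (clip[:, 3] > 0) & (ndc.abs() <= 1.0 + margin).all(dim=1)

# Only the selected voxels would then be passed to the differentiable
# isosurface extraction (e.g., Flexicubes) and the renderer for this view:
# active = voxel_centers[frustum_mask(voxel_centers, view_proj)]
```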