Fast3Dcache: Training-free 3D Geometry Synthesis Acceleration
November 27, 2025
Authors: Mengyu Yang, Yanming Yang, Chenyi Xu, Chenxi Song, Yufan Zuo, Tong Zhao, Ruibo Li, Chi Zhang
cs.AI
Abstract
Diffusion models have achieved impressive generative quality across modalities such as 2D images, videos, and 3D shapes, but their inference remains computationally expensive due to the iterative denoising process. While recent caching-based methods effectively reuse redundant computations to speed up 2D and video generation, directly applying these techniques to 3D diffusion models can severely disrupt geometric consistency. In 3D synthesis, even minor numerical errors in cached latent features accumulate, causing structural artifacts and topological inconsistencies. To overcome this limitation, we propose Fast3Dcache, a training-free, geometry-aware caching framework that accelerates 3D diffusion inference while preserving geometric fidelity. Our method introduces a Predictive Caching Scheduler Constraint (PCSC), which dynamically determines cache quotas according to voxel stabilization patterns, and a Spatiotemporal Stability Criterion (SSC), which selects stable features for reuse based on velocity magnitude and acceleration. Comprehensive experiments show that Fast3Dcache accelerates inference significantly, achieving up to a 27.12% speed-up and a 54.8% reduction in FLOPs, with minimal degradation in geometric quality as measured by Chamfer Distance (2.48%) and F-Score (1.95%).
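To make the two mechanisms concrete, the following is a minimal NumPy sketch, not the authors' implementation: all function names, thresholds, and the quota rule are hypothetical. It illustrates the general idea of an SSC-style check (a voxel feature is "stable" when both its first temporal difference, velocity, and its second difference, acceleration, fall below thresholds) and a PCSC-style quota that caps the fraction of voxels whose cached values are reused per step.

```python
import numpy as np

def ssc_stable_mask(z_prev2, z_prev1, z_curr, v_thresh=0.05, a_thresh=0.02):
    """Hypothetical SSC-style check: a voxel feature counts as stable
    when both its velocity magnitude (first temporal difference) and
    acceleration (second temporal difference) are below thresholds."""
    velocity = np.abs(z_curr - z_prev1)                               # first-order change
    acceleration = np.abs((z_curr - z_prev1) - (z_prev1 - z_prev2))   # second-order change
    return (velocity < v_thresh) & (acceleration < a_thresh)

def pcsc_quota(stable, velocity, quota):
    """Hypothetical PCSC-style quota: cap the fraction of reused voxels
    at `quota`, keeping only the lowest-velocity (most stable) ones."""
    max_reuse = int(quota * stable.size)
    idx = np.flatnonzero(stable.ravel())
    if idx.size > max_reuse:
        order = np.argsort(velocity.ravel()[idx])[:max_reuse]
        keep = np.zeros(stable.size, dtype=bool)
        keep[idx[order]] = True
        return keep.reshape(stable.shape)
    return stable
```

In a full pipeline, the mask would gate the denoiser: voxels in the mask take their value from the cache, and only the remaining voxels are recomputed, which is where the FLOPs savings come from.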