
Fast3Dcache: Training-free 3D Geometry Synthesis Acceleration

November 27, 2025
Authors: Mengyu Yang, Yanming Yang, Chenyi Xu, Chenxi Song, Yufan Zuo, Tong Zhao, Ruibo Li, Chi Zhang
cs.AI

Abstract

Diffusion models have achieved impressive generative quality across modalities like 2D images, videos, and 3D shapes, but their inference remains computationally expensive due to the iterative denoising process. While recent caching-based methods effectively reuse redundant computations to speed up 2D and video generation, directly applying these techniques to 3D diffusion models can severely disrupt geometric consistency. In 3D synthesis, even minor numerical errors in cached latent features accumulate, causing structural artifacts and topological inconsistencies. To overcome this limitation, we propose Fast3Dcache, a training-free geometry-aware caching framework that accelerates 3D diffusion inference while preserving geometric fidelity. Our method introduces a Predictive Caching Scheduler Constraint (PCSC) to dynamically determine cache quotas according to voxel stabilization patterns and a Spatiotemporal Stability Criterion (SSC) to select stable features for reuse based on velocity magnitude and acceleration criterion. Comprehensive experiments show that Fast3Dcache accelerates inference significantly, achieving up to a 27.12% speed-up and a 54.8% reduction in FLOPs, with minimal degradation in geometric quality as measured by Chamfer Distance (2.48%) and F-Score (1.95%).
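To make the Spatiotemporal Stability Criterion (SSC) concrete, here is a minimal sketch of how a per-feature stability test based on velocity magnitude and acceleration might look. The function name, thresholds, and finite-difference formulation are illustrative assumptions for exposition, not the paper's actual implementation or hyperparameters.

```python
import numpy as np

def stability_mask(prev_feat, curr_feat, prev_vel, dt=1.0,
                   vel_thresh=0.05, acc_thresh=0.02):
    """Hypothetical SSC-style check (illustrative, not the paper's code).

    A latent feature is marked 'stable' (a candidate for cache reuse)
    when both its finite-difference velocity and acceleration across
    denoising steps fall below small thresholds.
    """
    vel = (curr_feat - prev_feat) / dt   # first difference: velocity
    acc = (vel - prev_vel) / dt          # second difference: acceleration
    stable = (np.abs(vel) < vel_thresh) & (np.abs(acc) < acc_thresh)
    return stable, vel

# Toy usage: the first feature barely changes and is flagged stable;
# the second changes sharply and must be recomputed.
prev = np.array([1.00, 0.50])
curr = np.array([1.01, 0.90])
prev_vel = np.array([0.01, 0.05])
mask, vel = stability_mask(prev, curr, prev_vel)
```

A scheduler such as the paper's PCSC could then cap how many features the mask is allowed to flag per step, trading speed against geometric fidelity.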
PDF · December 2, 2025