Memorization in 3D Shape Generation: An Empirical Study

December 29, 2025
Authors: Shu Pu, Boya Zeng, Kaichen Zhou, Mengyu Wang, Zhuang Liu
cs.AI

Abstract

Generative models are increasingly used in 3D vision to synthesize novel shapes, yet it remains unclear whether their generation relies on memorizing training shapes. Understanding their memorization could help prevent training data leakage and improve the diversity of generated results. In this paper, we design an evaluation framework to quantify memorization in 3D generative models and study the influence of different data and modeling designs on memorization. We first apply our framework to quantify memorization in existing methods. Next, through controlled experiments with a latent vector-set (Vecset) diffusion model, we find that, on the data side, memorization depends on data modality, and increases with data diversity and finer-grained conditioning; on the modeling side, it peaks at a moderate guidance scale and can be mitigated by longer Vecsets and simple rotation augmentation. Together, our framework and analysis provide an empirical understanding of memorization in 3D generative models and suggest simple yet effective strategies to reduce it without degrading generation quality. Our code is available at https://github.com/zlab-princeton/3d_mem.
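The abstract does not spell out the evaluation framework, but a standard way to quantify memorization in shape generation is nearest-neighbor retrieval against the training set: a generated shape that lies unusually close to some training shape is treated as a likely copy. The sketch below illustrates this idea under assumptions not stated in the abstract (point-cloud representations, symmetric Chamfer distance, a fixed distance threshold); `memorization_score` and all other names are hypothetical, not the paper's implementation (see https://github.com/zlab-princeton/3d_mem for the actual framework).

```python
# A sketch of nearest-neighbor memorization scoring, assuming point-cloud
# shapes. `memorization_score` and `threshold` are illustrative assumptions,
# not the paper's actual metric.
import numpy as np

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def memorization_score(generated, training, threshold=0.01):
    """Fraction of generated shapes whose nearest training shape is closer
    than `threshold`, i.e. plausible copies of a training sample."""
    copied = sum(
        min(chamfer_distance(g, t) for t in training) < threshold
        for g in generated
    )
    return copied / len(generated)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    train = [rng.normal(size=(256, 3)) for _ in range(8)]
    # A deliberately "memorizing" generator: slightly perturbed training shapes.
    gen = [t + rng.normal(scale=1e-3, size=t.shape) for t in train[:4]]
    print(f"memorization score: {memorization_score(gen, train):.2f}")  # 1.00
```

In practice the threshold would need calibration, for example against the distances between distinct training shapes, so that "memorized" means closer than any naturally similar pair.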
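The abstract also reports that simple rotation augmentation suppresses memorization without degrading generation quality. A minimal sketch of such an augmentation for point clouds follows, assuming random rotations about the vertical axis applied per sample at training time; the paper's exact augmentation recipe is not given here.

```python
# A sketch of simple rotation augmentation for point clouds; rotating about
# the z (up) axis is an assumption, not necessarily the paper's recipe.
import numpy as np

def random_rotation_z(points: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Rotate a point cloud (N, 3) by a uniformly random angle about the z axis."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return points @ rot.T  # rotate every point

rng = np.random.default_rng(0)
cloud = rng.normal(size=(1024, 3))
augmented = random_rotation_z(cloud, rng)  # applied fresh each training step
```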