Interp3D: Correspondence-aware Interpolation for Generative Textured 3D Morphing

January 20, 2026
Authors: Xiaolu Liu, Yicong Li, Qiyuan He, Jiayin Zhu, Wei Ji, Angela Yao, Jianke Zhu
cs.AI

Abstract

Textured 3D morphing seeks to generate smooth and plausible transitions between two 3D assets, preserving both structural coherence and fine-grained appearance. This ability is crucial not only for advancing 3D generation research but also for practical applications in animation, editing, and digital content creation. Existing approaches either operate directly on geometry, limiting them to shape-only morphing that neglects texture, or extend 2D interpolation strategies into 3D, which often causes semantic ambiguity, structural misalignment, and texture blurring. These challenges underscore the need to jointly preserve geometric consistency, texture alignment, and robustness throughout the transition. To address this, we propose Interp3D, a novel training-free framework for textured 3D morphing. It harnesses generative priors and adopts a progressive alignment principle to ensure both geometric fidelity and texture coherence. Starting from semantically aligned interpolation in condition space, Interp3D enforces structural consistency via SLAT (Structured Latent)-guided structure interpolation, and finally transfers appearance details through fine-grained texture fusion. For comprehensive evaluation, we construct a dedicated dataset, Interp3DData, with graded difficulty levels, and assess generation results in terms of fidelity, transition smoothness, and plausibility. Both quantitative metrics and human studies demonstrate significant advantages of our approach over previous methods. Source code is available at https://github.com/xiaolul2/Interp3D.
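The abstract describes a three-stage progressive-alignment pipeline: interpolate in condition space, then in the structured latent (SLAT) space, then fuse textures. The sketch below illustrates only the general shape of such a scheme under simplifying assumptions; every name in it (slerp, morph_step, the plain linear blends standing in for SLAT-guided interpolation and fine-grained texture fusion) is a hypothetical placeholder, not the authors' implementation. Refer to the linked repository for the actual method.

```python
# Minimal conceptual sketch of a progressive interpolation pipeline
# (condition space -> structured latent -> texture). Hypothetical names;
# NOT the Interp3D implementation.
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Spherical linear interpolation, a common choice for condition embeddings."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return (1 - t) * a + t * b  # near-parallel vectors: fall back to lerp
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

def morph_step(cond_src, cond_tgt, slat_src, slat_tgt, tex_src, tex_tgt, t):
    # 1) Semantically aligned interpolation in condition space.
    cond_t = slerp(cond_src, cond_tgt, t)
    # 2) Structure interpolation over structured latents; a plain lerp stands
    #    in here for the SLAT-guided scheme described in the abstract.
    slat_t = (1 - t) * slat_src + t * slat_tgt
    # 3) Texture fusion; an alpha blend stands in for fine-grained fusion.
    tex_t = (1 - t) * tex_src + t * tex_tgt
    return cond_t, slat_t, tex_t

# Usage: sweep t over [0, 1] to produce intermediate states of the morph.
rng = np.random.default_rng(0)
c0, c1 = rng.normal(size=768), rng.normal(size=768)          # condition embeddings
s0, s1 = rng.normal(size=(64, 8)), rng.normal(size=(64, 8))  # structured latents
x0, x1 = rng.random((16, 16, 3)), rng.random((16, 16, 3))    # texture maps
frames = [morph_step(c0, c1, s0, s1, x0, x1, t) for t in np.linspace(0, 1, 5)]
```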