

360Anything: Geometry-Free Lifting of Images and Videos to 360°

January 22, 2026
作者: Ziyi Wu, Daniel Watson, Andrea Tagliasacchi, David J. Fleet, Marcus A. Brubaker, Saurabh Saxena
cs.AI

Abstract

Lifting perspective images and videos to 360° panoramas enables immersive 3D world generation. Existing approaches often rely on explicit geometric alignment between the perspective and the equirectangular projection (ERP) space. Yet, this requires known camera metadata, hindering application to in-the-wild data where such calibration is typically absent or noisy. We propose 360Anything, a geometry-free framework built upon pre-trained diffusion transformers. By treating the perspective input and the panorama target simply as token sequences, 360Anything learns the perspective-to-equirectangular mapping in a purely data-driven way, eliminating the need for camera information. Our approach achieves state-of-the-art performance on both image and video perspective-to-360° generation, outperforming prior work that uses ground-truth camera information. We also trace the root cause of the seam artifacts at ERP boundaries to zero-padding in the VAE encoder, and introduce Circular Latent Encoding to facilitate seamless generation. Finally, we show competitive results on zero-shot camera FoV and orientation estimation benchmarks, demonstrating 360Anything's deep geometric understanding and broader utility in computer vision tasks. Additional results are available at https://360anything.github.io/.
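The abstract attributes seam artifacts at the ERP boundary to zero-padding in the VAE encoder: a panorama's left and right edges are physically adjacent, but zero-padded convolutions treat them as independent image borders. The sketch below illustrates the general idea of circular (wrap) padding along the longitude axis using NumPy; it is a minimal illustration of the padding principle, not the paper's actual Circular Latent Encoding implementation, and the function name `circular_pad_width` is hypothetical.

```python
import numpy as np

def circular_pad_width(x, pad):
    """Pad an ERP feature map [H, W] circularly along width (longitude),
    and with zeros along height (latitude), so that a convolution sliding
    across the left/right seam sees wrapped, continuous content."""
    x = np.pad(x, ((0, 0), (pad, pad)), mode="wrap")      # wrap longitude
    x = np.pad(x, ((pad, pad), (0, 0)), mode="constant")  # zero-pad latitude
    return x

# Toy ERP feature map: column index stands in for longitude.
feat = np.arange(12, dtype=float).reshape(3, 4)
padded = circular_pad_width(feat, pad=1)

# The new leftmost column equals the original rightmost column (and vice
# versa), so content is continuous across the 0°/360° seam.
assert np.allclose(padded[1:-1, 0], feat[:, -1])
assert np.allclose(padded[1:-1, -1], feat[:, 0])
```

In frameworks like PyTorch the same effect is commonly obtained by setting a convolution's `padding_mode="circular"` on the width dimension; the key point is that the wrap must happen at encoding time so the latent itself carries no artificial seam.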