Sharp Monocular View Synthesis in Less Than a Second
December 11, 2025
Authors: Lars Mescheder, Wei Dong, Shiwei Li, Xuyang Bai, Marcel Santos, Peiyun Hu, Bruno Lecouat, Mingmin Zhen, Amaël Delaunoy, Tian Fang, Yanghai Tsin, Stephan R. Richter, Vladlen Koltun
cs.AI
Abstract
We present SHARP, an approach to photorealistic view synthesis from a single image. Given a single photograph, SHARP regresses the parameters of a 3D Gaussian representation of the depicted scene. This is done in less than a second on a standard GPU via a single feedforward pass through a neural network. The 3D Gaussian representation produced by SHARP can then be rendered in real time, yielding high-resolution photorealistic images for nearby views. The representation is metric, with absolute scale, supporting metric camera movements. Experimental results demonstrate that SHARP delivers robust zero-shot generalization across datasets. It sets a new state of the art on multiple datasets, reducing LPIPS by 25-34% and DISTS by 21-43% versus the best prior model, while lowering the synthesis time by three orders of magnitude. Code and weights are provided at https://github.com/apple/ml-sharp.
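The pipeline described above (one image in, per-primitive 3D Gaussian parameters out in a single feedforward pass, followed by real-time rendering) can be sketched as follows. This is a minimal illustrative sketch, not the released implementation: the function name, the parameter layout (mean, scale, rotation, color, opacity), and the random projection standing in for trained network weights are all assumptions for illustration only.

```python
import numpy as np

def regress_gaussians(image: np.ndarray, rng: np.random.Generator) -> dict:
    """Map an (H, W, 3) image to H*W 3D Gaussian primitives.

    Hypothetical sketch: a real model would use a trained neural network;
    a fixed random projection stands in for the network weights here.
    """
    h, w, _ = image.shape
    feats = image.reshape(h * w, 3)
    # Each Gaussian: 3D mean (metric scale), 3 log-scales, a rotation
    # quaternion, RGB color, and opacity -> 14 parameters per primitive.
    proj = rng.normal(size=(3, 14))
    params = feats @ proj
    return {
        "means": params[:, 0:3],        # metric 3D positions
        "log_scales": params[:, 3:6],   # anisotropic extents
        "quats": params[:, 6:10],       # orientations
        "colors": params[:, 10:13],     # RGB
        "opacities": params[:, 13:14],  # alpha
    }

# Single "feedforward pass" over a toy 4x4 image -> 16 Gaussians.
rng = np.random.default_rng(0)
image = rng.random((4, 4, 3)).astype(np.float32)
gaussians = regress_gaussians(image, rng)
print(gaussians["means"].shape)
```

The resulting parameter set would then be handed to a 3D Gaussian splatting rasterizer to render nearby views in real time; the rendering step is omitted here.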