

3DTV: A Feedforward Interpolation Network for Real-Time View Synthesis

April 13, 2026
作者: Stefan Schulz, Fernando Edelstein, Hannah Dröge, Matthias B. Hullin, Markus Plack
cs.AI

Abstract

Real-time free-viewpoint rendering requires balancing multi-camera redundancy with the latency constraints of interactive applications. We address this challenge by combining lightweight geometry with learning and propose 3DTV, a feedforward network for real-time sparse-view interpolation. A Delaunay-based triplet selection ensures angular coverage for each target view. Building on this, we introduce a pose-aware depth module that estimates a coarse-to-fine depth pyramid, enabling efficient feature reprojection and occlusion-aware blending. Unlike methods that require scene-specific optimization, 3DTV runs feedforward without retraining, making it practical for AR/VR, telepresence, and interactive applications. Our experiments on challenging multi-view video datasets demonstrate that 3DTV consistently achieves a strong balance of quality and efficiency, outperforming recent real-time novel-view baselines. Crucially, 3DTV avoids explicit proxies, enabling robust rendering across diverse scenes. This makes it a practical solution for low-latency multi-view streaming and interactive rendering. Project Page: https://stefanmschulz.github.io/3DTV_webpage/
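The Delaunay-based triplet selection mentioned in the abstract can be sketched as follows. This is an illustrative assumption, not the paper's implementation: camera positions are projected to a 2D viewing plane, triangulated with `scipy.spatial.Delaunay`, and the triangle containing the target view's projection yields the three source cameras, which guarantees the target is angularly surrounded by its triplet.

```python
import numpy as np
from scipy.spatial import Delaunay

def select_triplet(cam_xy, target_xy):
    """Pick the three source cameras whose Delaunay triangle contains
    the target view. cam_xy: (N, 2) projected camera positions,
    target_xy: (2,) projected target position. Returns the indices of
    the triplet, or None if the target lies outside the camera hull.
    (Hypothetical helper for illustration only.)"""
    tri = Delaunay(cam_xy)
    # find_simplex returns -1 when the query point is outside the hull
    simplex = tri.find_simplex(target_xy[None, :])[0]
    if simplex < 0:
        return None
    return tri.simplices[simplex]

# Toy example: four cameras at the corners of a unit square,
# target view inside the square.
cams = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
triplet = select_triplet(cams, np.array([0.25, 0.25]))
```

A target outside the camera hull returns `None`, in which case a real system would have to fall back to extrapolation or the nearest triangle; the abstract only claims coverage for interpolated views.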