CRISP: Contact-Guided Real2Sim from Monocular Video with Planar Scene Primitives

December 16, 2025
Authors: Zihan Wang, Jiashun Wang, Jeff Tan, Yiwen Zhao, Jessica Hodgins, Shubham Tulsiani, Deva Ramanan
cs.AI

Abstract

We introduce CRISP, a method that recovers simulatable human motion and scene geometry from monocular video. Prior work on joint human-scene reconstruction either relies on data-driven priors and joint optimization with no physics in the loop, or recovers noisy geometry with artifacts that cause motion-tracking policies to fail during scene interactions. In contrast, our key insight is to recover convex, clean, and simulation-ready geometry by fitting planar primitives to a point-cloud reconstruction of the scene, via a simple clustering pipeline over depth, normals, and optical flow. To reconstruct scene geometry that may be occluded during interactions, we make use of human-scene contact modeling (e.g., we use human posture to reconstruct the occluded seat of a chair). Finally, we ensure that the human and scene reconstructions are physically plausible by using them to drive a humanoid controller via reinforcement learning. Our approach reduces the motion-tracking failure rate from 55.2% to 6.9% on human-centric video benchmarks (EMDB, PROX), while delivering 43% faster RL simulation throughput. We further validate it on in-the-wild videos, including casually captured footage, Internet videos, and even Sora-generated videos. This demonstrates CRISP's ability to generate physically valid human motion and interaction environments at scale, greatly advancing real-to-sim applications for robotics and AR/VR.
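
To make the planar-primitive idea concrete, here is a minimal sketch of one plausible instantiation: cluster points by normal direction with DBSCAN, then fit a least-squares plane to each cluster. CRISP's actual pipeline also exploits depth and optical-flow cues; the names `planar_primitives` and `fit_plane` are illustrative, not the authors' API.

```python
# Minimal sketch: normal-based clustering + per-cluster plane fit.
# Assumes points and per-point normals are given; the paper's pipeline
# additionally clusters over depth and optical flow.
import numpy as np
from sklearn.cluster import DBSCAN

def fit_plane(points):
    """Least-squares plane through points (N, 3): returns (normal, offset)."""
    centroid = points.mean(axis=0)
    # The smallest right-singular vector of the centered points is the normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return normal, -normal @ centroid  # plane: normal . x + offset = 0

def planar_primitives(points, normals, eps=0.1, min_points=50):
    """Group points with similar normals; fit one plane per group."""
    labels = DBSCAN(eps=eps, min_samples=min_points).fit_predict(normals)
    return [fit_plane(points[labels == k]) for k in set(labels) - {-1}]

# Toy usage: a floor (z = 0) and a wall (x = 2) with noisy normals.
rng = np.random.default_rng(0)
floor = np.c_[rng.uniform(0, 2, (200, 2)), np.zeros(200)]
wall = np.c_[np.full(200, 2.0), rng.uniform(0, 2, (200, 2))]
pts = np.vstack([floor, wall])
nrm = np.vstack([np.tile([0.0, 0.0, 1.0], (200, 1)),
                 np.tile([1.0, 0.0, 0.0], (200, 1))])
nrm += 0.02 * rng.standard_normal(nrm.shape)
for n, d in planar_primitives(pts, nrm):
    print(f"normal={np.round(n, 2)}, offset={d:.2f}")
```

Fitting a handful of convex planar primitives, rather than meshing a noisy point cloud, is what keeps the recovered geometry clean and cheap to simulate.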
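
The contact-guided completion step can be sketched in the same spirit: when the body is detected in a seated pose, a horizontal support primitive is hypothesized under the pelvis even if the seat itself is never visible. The heuristic and the helper `seat_from_contact` below are illustrative assumptions; the abstract does not specify the contact model at this level of detail.

```python
# Hypothetical sketch of contact-guided completion: infer an occluded
# seat plane from human posture (a seated-pose heuristic, not CRISP's
# actual contact model).
import numpy as np

def seat_from_contact(pelvis, knee, height_tol=0.15, half_size=0.25):
    """If pelvis and knees are roughly level (a seated pose), return a
    square seat primitive (center, up-normal, half-extent) under the hips."""
    if abs(pelvis[2] - knee[2]) < height_tol:
        center = np.array([pelvis[0], pelvis[1], pelvis[2] - 0.05])
        return center, np.array([0.0, 0.0, 1.0]), half_size
    return None  # standing or other pose: no seat implied

# Toy usage: pelvis and knee at similar heights -> a seat is inferred.
print(seat_from_contact(np.array([0.0, 0.0, 0.45]),
                        np.array([0.3, 0.0, 0.50])))
```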
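
Finally, the reconstructions drive an RL-trained humanoid controller, and tracking failure is what the 55.2% to 6.9% numbers measure. A common shape for such a tracking reward (DeepMimic-style, shown only as an assumption about what an objective like CRISP's might look like) is:

```python
# Illustrative DeepMimic-style tracking reward; not necessarily CRISP's
# exact objective.
import numpy as np

def tracking_reward(sim_joints, ref_joints, sigma=0.25):
    """Exponentiated negative mean joint-position error, in (0, 1]."""
    err = np.linalg.norm(sim_joints - ref_joints, axis=-1).mean()
    return float(np.exp(-(err / sigma) ** 2))
```

Tracking failure is typically declared when this error exceeds a threshold, which is why cleaner, artifact-free scene geometry directly lowers the failure rate.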