

Coherent Human-Scene Reconstruction from Multi-Person Multi-View Video in a Single Pass

March 13, 2026
Authors: Sangmin Kim, Minhyuk Hwang, Geonho Cha, Dongyoon Wee, Jaesik Park
cs.AI

Abstract

Recent advances in 3D foundation models have led to growing interest in reconstructing humans and their surrounding environments. However, most existing approaches focus on monocular inputs, and extending them to multi-view settings requires additional overhead modules or preprocessed data. To address this, we present CHROMM, a unified framework that jointly estimates cameras, scene point clouds, and human meshes from multi-person multi-view videos without relying on external modules or preprocessing. We integrate strong geometric and human priors from Pi3X and Multi-HMR into a single trainable neural network architecture, and introduce a scale adjustment module to resolve the scale discrepancy between humans and the scene. We also introduce a multi-view fusion strategy to aggregate per-view estimates into a single representation at test time. Finally, we propose a geometry-based multi-person association method, which is more robust than appearance-based approaches. Experiments on EMDB, RICH, EgoHumans, and EgoExo4D show that CHROMM achieves competitive performance in global human motion and multi-view pose estimation while running over 8x faster than prior optimization-based multi-view approaches. Project page: https://nstar1125.github.io/chromm.
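The abstract does not detail the geometry-based association method, but the core idea of matching people across views by their estimated 3D geometry rather than appearance can be sketched minimally. The following is an illustrative assumption, not the paper's implementation: people are matched greedily by Euclidean distance between their estimated 3D root-joint positions in a shared world frame (the function name, greedy strategy, and distance threshold are all hypothetical).

```python
import numpy as np

def associate_by_geometry(roots_a, roots_b, max_dist=0.5):
    """Greedily match people across two views by 3D root-joint distance.

    roots_a: (N, 3) estimated 3D root positions seen from view A (world frame)
    roots_b: (M, 3) estimated 3D root positions seen from view B (world frame)
    max_dist: hypothetical gating threshold in meters
    Returns a list of (index_in_a, index_in_b) pairs.
    """
    # Pairwise Euclidean distances between all candidates.
    dists = np.linalg.norm(roots_a[:, None, :] - roots_b[None, :, :], axis=-1)
    matches = []
    for _ in range(min(len(roots_a), len(roots_b))):
        # Take the globally smallest remaining distance.
        i, j = np.unravel_index(np.argmin(dists), dists.shape)
        if dists[i, j] > max_dist:
            break  # remaining candidates are too far apart to be the same person
        matches.append((int(i), int(j)))
        # Remove matched row/column from further consideration.
        dists[i, :] = np.inf
        dists[:, j] = np.inf
    return matches

# Two people observed from two views, with small estimation noise.
view_a = np.array([[0.0, 0.0, 2.0], [1.0, 0.0, 3.0]])
view_b = np.array([[1.02, 0.0, 3.0], [0.0, 0.0, 2.01]])
print(associate_by_geometry(view_a, view_b))  # [(0, 1), (1, 0)]
```

Unlike appearance features, these 3D positions are invariant to viewpoint and lighting, which is consistent with the abstract's claim that geometric cues make cross-view association more robust than appearance-based matching.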