Embody 3D: A Large-scale Multimodal Motion and Behavior Dataset
October 17, 2025
Authors: Claire McLean, Makenzie Meendering, Tristan Swartz, Orri Gabbay, Alexandra Olsen, Rachel Jacobs, Nicholas Rosen, Philippe de Bree, Tony Garcia, Gadsden Merrill, Jake Sandakly, Julia Buffalini, Neham Jain, Steven Krenn, Moneish Kumar, Dejan Markovic, Evonne Ng, Fabian Prada, Andrew Saba, Siwei Zhang, Vasu Agrawal, Tim Godisart, Alexander Richard, Michael Zollhoefer
cs.AI
Abstract
The Codec Avatars Lab at Meta introduces Embody 3D, a multimodal dataset of
500 individual hours of 3D motion data from 439 participants collected in a
multi-camera collection stage, amounting to over 54 million frames of tracked
3D motion. The dataset features a wide range of single-person motion data,
including prompted motions, hand gestures, and locomotion, as well as
multi-person behavioral and conversational data such as discussions, conversations
in different emotional states, collaborative activities, and co-living
scenarios in an apartment-like space. We provide tracked human motion including
hand tracking and body shape, text annotations, and a separate audio track for
each participant.
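
The headline numbers are internally consistent: 500 hours of capture at a 30 fps tracking rate works out to 54 million frames, matching the figure in the abstract (the 30 fps rate is an inference from those two numbers, not stated in the text). The sketch below checks that arithmetic and illustrates, with a purely hypothetical schema, what a per-frame record combining the described modalities (body pose, hand tracking, body shape, text annotation, per-participant audio) might look like; all field names and dimensions are assumptions, not the dataset's actual format.

```python
from dataclasses import dataclass, field

@dataclass
class Embody3DFrame:
    """Hypothetical per-frame record for illustration only.

    Field names and shapes are assumptions; the actual Embody 3D
    release may use a different schema entirely.
    """
    participant_id: str
    frame_index: int
    body_pose: list       # e.g. per-joint rotation parameters
    hand_pose: list       # e.g. per-hand joint parameters
    body_shape: list      # e.g. identity shape coefficients
    text_annotation: str = ""          # behavior description for this segment
    audio_chunk: list = field(default_factory=list)  # samples from this
                                                     # participant's track

def frames_for_hours(hours: float, fps: float = 30.0) -> int:
    """Total frame count for a capture duration at a given frame rate."""
    return int(hours * 3600 * fps)

# 500 hours at an assumed 30 fps gives exactly the 54 million frames
# cited in the abstract.
print(frames_for_hours(500))  # 54000000
```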