Embody 3D: A Large-scale Multimodal Motion and Behavior Dataset
October 17, 2025
Authors: Claire McLean, Makenzie Meendering, Tristan Swartz, Orri Gabbay, Alexandra Olsen, Rachel Jacobs, Nicholas Rosen, Philippe de Bree, Tony Garcia, Gadsden Merrill, Jake Sandakly, Julia Buffalini, Neham Jain, Steven Krenn, Moneish Kumar, Dejan Markovic, Evonne Ng, Fabian Prada, Andrew Saba, Siwei Zhang, Vasu Agrawal, Tim Godisart, Alexander Richard, Michael Zollhoefer
cs.AI
Abstract
The Codec Avatars Lab at Meta introduces Embody 3D, a multimodal dataset of
500 individual hours of 3D motion data from 439 participants collected in a
multi-camera collection stage, amounting to over 54 million frames of tracked
3D motion. The dataset features a wide range of single-person motion data,
including prompted motions, hand gestures, and locomotion; as well as
multi-person behavioral and conversational data like discussions, conversations
in different emotional states, collaborative activities, and co-living
scenarios in an apartment-like space. We provide tracked human motion including
hand tracking and body shape, text annotations, and a separate audio track for
each participant.