Agent-to-Sim: Learning Interactive Behavior Models from Casual Longitudinal Videos

October 21, 2024
Authors: Gengshan Yang, Andrea Bajcsy, Shunsuke Saito, Angjoo Kanazawa
cs.AI

Abstract

We present Agent-to-Sim (ATS), a framework for learning interactive behavior models of 3D agents from casual longitudinal video collections. Unlike prior works that rely on marker-based tracking and multiview cameras, ATS learns the natural behaviors of animal and human agents non-invasively from video observations recorded over a long time span (e.g., a month) in a single environment. Modeling the 3D behavior of an agent requires persistent 3D tracking (i.e., knowing which point corresponds to which) over a long time period. To obtain such data, we develop a coarse-to-fine registration method that tracks the agent and the camera over time through a canonical 3D space, resulting in a complete and persistent spacetime 4D representation. We then train a generative model of agent behaviors using paired perception and motion data queried from the 4D reconstruction. ATS enables real-to-sim transfer from video recordings of an agent to an interactive behavior simulator. We demonstrate results on pets (e.g., cats, dogs, bunnies) and humans, given monocular RGBD videos captured by a smartphone.
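
The abstract outlines a two-stage pipeline: register the agent and camera into a persistent canonical 3D space, then train a perception-conditioned generative model of motion from the resulting 4D reconstruction. The sketch below is a minimal, hypothetical illustration of that data flow in Python; all names (Frame, AgentState, coarse_to_fine_registration, BehaviorModel) are illustrative placeholders and do not reflect the authors' actual implementation.

```python
# Hypothetical sketch of the ATS real-to-sim data flow described in the abstract.
# All class and function names here are illustrative placeholders, not the paper's API.
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class Frame:
    """One monocular RGBD observation from the casual video collection."""
    rgb: np.ndarray    # (H, W, 3)
    depth: np.ndarray  # (H, W)


@dataclass
class AgentState:
    """Per-frame agent pose expressed in a shared canonical 3D space."""
    root_pose: np.ndarray  # (4, 4) rigid transform, canonical -> world
    body_pose: np.ndarray  # (J, 3) joint/deformation parameters


def coarse_to_fine_registration(frames: List[Frame]) -> List[AgentState]:
    """Register the agent and camera over time through a canonical 3D space.

    Placeholder: a real implementation would first align coarsely (e.g., per-video
    root poses), then refine dense correspondences, yielding a persistent
    spacetime (4D) reconstruction. Here we return identity poses.
    """
    return [
        AgentState(root_pose=np.eye(4), body_pose=np.zeros((24, 3)))
        for _ in frames
    ]


class BehaviorModel:
    """Generative model of agent motion conditioned on perception.

    Stand-in: trained on (perception, motion) pairs queried from the 4D
    reconstruction; this toy version just replays the mean motion.
    """

    def fit(self, perception: np.ndarray, motion: np.ndarray) -> None:
        self.mean_motion = motion.mean(axis=0)

    def sample(self, perception: np.ndarray) -> np.ndarray:
        # A real model would sample future motion conditioned on what the agent
        # perceives (e.g., the observer's position); we return the mean motion.
        return self.mean_motion


if __name__ == "__main__":
    # Toy data standing in for a longitudinal RGBD video collection.
    frames = [Frame(rgb=np.zeros((64, 64, 3)), depth=np.ones((64, 64)))
              for _ in range(10)]
    states = coarse_to_fine_registration(frames)

    # Toy (perception, motion) pairs: perception = flattened root pose,
    # motion = body pose of the next frame.
    perception = np.stack([s.root_pose.ravel() for s in states[:-1]])
    motion = np.stack([s.body_pose.ravel() for s in states[1:]])

    model = BehaviorModel()
    model.fit(perception, motion)
    print("sampled motion shape:", model.sample(perception[0]).shape)
```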
