EnerVerse-AC: Envisioning Embodied Environments with Action Condition
May 14, 2025
Authors: Yuxin Jiang, Shengcong Chen, Siyuan Huang, Liliang Chen, Pengfei Zhou, Yue Liao, Xindong He, Chiming Liu, Hongsheng Li, Maoqing Yao, Guanghui Ren
cs.AI
Abstract
Robotic imitation learning has advanced from solving static tasks to
addressing dynamic interaction scenarios, but testing and evaluation remain
costly and challenging due to the need for real-time interaction with dynamic
environments. We propose EnerVerse-AC (EVAC), an action-conditional world model
that generates future visual observations based on an agent's predicted
actions, enabling realistic and controllable robotic inference. Building on
prior architectures, EVAC introduces a multi-level action-conditioning
mechanism and ray map encoding for dynamic multi-view image generation, while
expanding the training data with diverse failure trajectories to improve
generalization. As both a data engine and evaluator, EVAC augments
human-collected trajectories into diverse datasets and generates realistic,
action-conditioned video observations for policy testing, eliminating the need
for physical robots or complex simulations. This approach significantly reduces
costs while maintaining high fidelity in robotic manipulation evaluation.
Extensive experiments validate the effectiveness of our method. Code,
checkpoints, and datasets can be found at
<https://annaj2178.github.io/EnerverseAC.github.io>.
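The evaluation loop the abstract describes, in which a learned action-conditional world model stands in for the physical environment so a policy can be tested "in imagination", can be sketched as follows. This is a minimal illustrative sketch, not EVAC's actual API: the class names, the 7-DoF action vector, and the toy pixel-shift "prediction" are all assumptions made so the loop is runnable without a trained model.

```python
import numpy as np

class DummyWorldModel:
    """Stand-in for an action-conditional video world model.

    Given the current observation (an image) and a candidate action, it
    predicts the next observation. The real model would condition a video
    generator on the action; here a toy linear update keeps it runnable.
    """
    def predict(self, obs, action):
        # Shift pixel intensities by the action's mean (placeholder dynamics).
        return np.clip(obs + action.mean() * 0.01, 0.0, 1.0)

class DummyPolicy:
    """Stand-in policy: maps an observation to a 7-DoF action vector."""
    def act(self, obs):
        return np.full(7, obs.mean())  # deterministic toy action

def imagined_rollout(policy, world_model, init_obs, horizon=10):
    """Roll the policy out inside the world model for `horizon` steps,
    with no physical robot or simulator in the loop."""
    obs, trajectory = init_obs, []
    for _ in range(horizon):
        action = policy.act(obs)                 # policy proposes an action
        obs = world_model.predict(obs, action)   # model imagines the outcome
        trajectory.append((action, obs))
    return trajectory

if __name__ == "__main__":
    init = np.full((64, 64, 3), 0.5)  # a flat gray 64x64 RGB "observation"
    traj = imagined_rollout(DummyPolicy(), DummyWorldModel(), init)
    print(len(traj))  # 10 imagined steps
```

In the paper's setting the `predict` step would be the action-conditioned video generator, and the resulting imagined trajectories could be scored for task success, which is what lets policy evaluation avoid real-time interaction with a dynamic environment.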