RoboVIP: Multi-View Video Generation with Visual Identity Prompting Augments Robot Manipulation

January 8, 2026
Authors: Boyang Wang, Haoran Zhang, Shujie Zhang, Jinkun Hao, Mingda Jia, Qi Lv, Yucheng Mao, Zhaoyang Lyu, Jia Zeng, Xudong Xu, Jiangmiao Pang
cs.AI

Abstract

The diversity, quantity, and quality of manipulation data are critical for training effective robot policies. However, due to hardware and physical setup constraints, collecting large-scale real-world manipulation data across diverse environments remains difficult. Recent work uses text-prompt-conditioned image diffusion models to augment manipulation data by altering the backgrounds and tabletop objects in visual observations. These approaches, however, often overlook the multi-view, temporally coherent observations that state-of-the-art policy models require, and text prompts alone cannot reliably specify the scene setup. To provide the diffusion model with explicit visual guidance, we introduce visual identity prompting, which supplies exemplar images as conditioning inputs to steer generation toward the desired scene setup. To this end, we also build a scalable pipeline that curates a visual identity pool from large robotics datasets. Training downstream vision-language-action and visuomotor policy models on our augmented manipulation data yields consistent performance gains in both simulation and real-robot settings.
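
To make the conditioning idea concrete, below is a minimal sketch of how exemplar-image tokens could be fused with text-prompt tokens before being fed to a diffusion backbone's cross-attention. This is not the authors' released code; the module name, encoder choices, and dimensions are illustrative assumptions.

```python
# Hedged sketch of visual identity prompting: exemplar images drawn from a
# curated identity pool are encoded and injected as extra conditioning
# tokens alongside the text prompt. All names/dims here are assumptions.
import torch
import torch.nn as nn

class VisualIdentityConditioner(nn.Module):
    """Fuses text-prompt embeddings with exemplar-image embeddings so a
    diffusion backbone can cross-attend to both modalities (hypothetical)."""

    def __init__(self, image_dim: int = 768, text_dim: int = 768, cond_dim: int = 768):
        super().__init__()
        # Project both streams into a shared conditioning space.
        self.image_proj = nn.Linear(image_dim, cond_dim)
        self.text_proj = nn.Linear(text_dim, cond_dim)

    def forward(self, text_tokens: torch.Tensor, exemplar_feats: torch.Tensor) -> torch.Tensor:
        # text_tokens:    (B, T_text, text_dim)  from a frozen text encoder
        # exemplar_feats: (B, T_img,  image_dim) from a frozen image encoder,
        #                 one token sequence per exemplar identity image
        cond = torch.cat(
            [self.text_proj(text_tokens), self.image_proj(exemplar_feats)], dim=1
        )
        return cond  # (B, T_text + T_img, cond_dim), consumed by cross-attention

if __name__ == "__main__":
    conditioner = VisualIdentityConditioner()
    text = torch.randn(2, 77, 768)       # e.g. CLIP-style text embeddings
    exemplars = torch.randn(2, 16, 768)  # patch tokens of exemplar images
    print(conditioner(text, exemplars).shape)  # torch.Size([2, 93, 768])
```

The key design point the sketch captures is that the exemplar images act as an explicit visual prompt rather than a textual description, so scene attributes that text cannot pin down (a specific background texture, a particular distractor object) are specified directly.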
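
The abstract also mentions a scalable pipeline for curating the visual identity pool. The paper describes this only at a high level, so the following is a hedged sketch of one plausible step, assuming candidate object/background crops have already been extracted and embedded with a frozen image encoder: a greedy cosine-similarity filter that keeps the pool diverse by dropping near-duplicate identities.

```python
# Hedged sketch of an identity-pool dedup step (assumed detail, not the
# authors' pipeline): keep a candidate crop only if it is sufficiently
# dissimilar from every crop already in the pool.
import torch
import torch.nn.functional as F

def curate_identity_pool(crop_embeddings: torch.Tensor, sim_threshold: float = 0.9) -> list[int]:
    """Greedy dedup over candidate crops. Returns indices of kept crops."""
    kept: list[int] = []
    feats = F.normalize(crop_embeddings, dim=-1)  # unit-norm for cosine sim
    for i in range(feats.shape[0]):
        # Keep the crop if the pool is empty or its max similarity to all
        # previously kept crops stays below the threshold.
        if not kept or (feats[i] @ feats[kept].T).max() < sim_threshold:
            kept.append(i)
    return kept

if __name__ == "__main__":
    torch.manual_seed(0)
    # Stand-in for features of object/background crops mined from a large
    # robotics dataset.
    embs = torch.randn(100, 512)
    pool = curate_identity_pool(embs, sim_threshold=0.5)
    print(f"kept {len(pool)} of {embs.shape[0]} candidate identities")
```

A filter of this kind is what makes the pool "curated" rather than raw: without it, common objects would dominate the pool and the augmented scenes would lose the diversity the method is after.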