MIND-V: Hierarchical Video Generation for Long-Horizon Robotic Manipulation with RL-based Physical Alignment
December 7, 2025
Authors: Ruicheng Zhang, Mingyang Zhang, Jun Zhou, Zhangrui Guo, Xiaofan Liu, Zunnan Xu, Zhizhou Zhong, Puxin Yan, Haocheng Luo, Xiu Li
cs.AI
Abstract
Embodied imitation learning is constrained by the scarcity of diverse, long-horizon robotic manipulation data. Existing video generation models for this domain are limited to synthesizing short clips of simple actions and often rely on manually defined trajectories. To address this, we introduce MIND-V, a hierarchical framework designed to synthesize physically plausible and logically coherent videos of long-horizon robotic manipulation. Inspired by cognitive science, MIND-V bridges high-level reasoning with pixel-level synthesis through three core components: a Semantic Reasoning Hub (SRH) that leverages a pre-trained vision-language model for task planning; a Behavioral Semantic Bridge (BSB) that translates abstract instructions into domain-invariant representations; and a Motor Video Generator (MVG) for conditional video rendering. MIND-V employs Staged Visual Future Rollouts, a test-time optimization strategy that enhances long-horizon robustness. To align the generated videos with physical laws, we introduce a GRPO reinforcement learning post-training phase guided by a novel Physical Foresight Coherence (PFC) reward. PFC leverages the V-JEPA world model to enforce physical plausibility by aligning the predicted and actual dynamic evolutions in the feature space. MIND-V demonstrates state-of-the-art performance in long-horizon robotic manipulation video generation, establishing a scalable and controllable paradigm for embodied data synthesis.
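The abstract describes the PFC reward as aligning predicted and observed dynamic evolutions in a world model's feature space. The minimal sketch below illustrates one way such a reward could be computed, assuming a V-JEPA-like model that exposes hypothetical `encode` and `predict` methods; the paper's actual interfaces, feature granularity, and reward formulation are not shown here and may differ.

```python
# Hypothetical sketch of a Physical Foresight Coherence (PFC)-style reward.
# `vjepa` is assumed to expose: encode(frames) -> features, and
# predict(context_features) -> predicted future features.
import torch
import torch.nn.functional as F

def pfc_reward(vjepa, video: torch.Tensor, context_len: int) -> torch.Tensor:
    """Score the physical plausibility of a generated clip.

    video: (B, T, C, H, W) generated frames.
    Returns a per-sample reward: the cosine similarity between the world
    model's predicted future features (from the context frames) and the
    features actually observed in the generated continuation.
    """
    context, future = video[:, :context_len], video[:, context_len:]
    with torch.no_grad():
        ctx_feat = vjepa.encode(context)     # (B, D) context representation
        pred_feat = vjepa.predict(ctx_feat)  # (B, D) predicted future dynamics
        obs_feat = vjepa.encode(future)      # (B, D) observed future dynamics
    # High similarity => the generated rollout evolves as the world model expects.
    return F.cosine_similarity(pred_feat, obs_feat, dim=-1)
```

In a GRPO-style post-training loop, a reward of this form would be evaluated on each sampled video rollout and used to compute group-relative advantages for the generator; this usage is an inference from the abstract, not a description of the authors' exact training pipeline.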