
MoRL: Reinforced Reasoning for Unified Motion Understanding and Generation

February 16, 2026
作者: Hongpeng Wang, Zeyu Zhang, Wenhao Li, Hao Tang
cs.AI

Abstract

Human motion understanding and generation are crucial for vision and robotics but remain limited in reasoning capability and test-time planning. We propose MoRL, a unified multimodal motion model trained with supervised fine-tuning and reinforcement learning with verifiable rewards. Our task-specific reward design combines semantic alignment and reasoning coherence for understanding with physical plausibility and text-motion consistency for generation, improving both logical reasoning and perceptual realism. To further enhance inference, we introduce Chain-of-Motion (CoM), a test-time reasoning method that enables step-by-step planning and reflection. We also construct two large-scale CoT datasets, MoUnd-CoT-140K and MoGen-CoT-140K, to align motion sequences with reasoning traces and action descriptions. Experiments on HumanML3D and KIT-ML show that MoRL achieves significant gains over state-of-the-art baselines. Code: https://github.com/AIGeeksGroup/MoRL. Website: https://aigeeksgroup.github.io/MoRL.