MotionEdit: Benchmarking and Learning Motion-Centric Image Editing
December 11, 2025
Authors: Yixin Wan, Lei Ke, Wenhao Yu, Kai-Wei Chang, Dong Yu
cs.AI
Abstract
We introduce MotionEdit, a novel dataset for motion-centric image editing: the task of modifying subject actions and interactions while preserving identity, structure, and physical plausibility. Unlike existing image editing datasets that focus on static appearance changes or contain only sparse, low-quality motion edits, MotionEdit provides high-fidelity image pairs depicting realistic motion transformations extracted and verified from continuous videos. This new task is not only scientifically challenging but also practically significant, powering downstream applications such as frame-controlled video synthesis and animation.
To evaluate model performance on this new task, we introduce MotionEdit-Bench, a benchmark that challenges models with motion-centric edits and measures performance using generative, discriminative, and preference-based metrics. Benchmark results reveal that motion editing remains highly challenging for existing state-of-the-art diffusion-based editing models. To address this gap, we propose MotionNFT (Motion-guided Negative-aware Fine Tuning), a post-training framework that computes motion-alignment rewards based on how well the motion flow between the input and the model-edited image matches the ground-truth motion, guiding models toward accurate motion transformations. Extensive experiments on FLUX.1 Kontext and Qwen-Image-Edit show that MotionNFT consistently improves the editing quality and motion fidelity of both base models on the motion editing task without sacrificing general editing ability, demonstrating its effectiveness.
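The abstract describes MotionNFT's reward as a comparison between the motion flow induced by the model's edit and the ground-truth motion. As a rough illustration only, and not the authors' implementation, the sketch below estimates both flows with torchvision's pretrained RAFT model and scores their agreement with per-pixel cosine similarity; the function name motion_alignment_reward, the choice of flow estimator, and the similarity measure are all assumptions.

```python
# Hypothetical sketch of a motion-alignment reward in the spirit of MotionNFT.
# Assumes torchvision's RAFT optical-flow model; names are illustrative, not
# the authors' actual implementation.
import torch
import torch.nn.functional as F
from torchvision.models.optical_flow import raft_large, Raft_Large_Weights

weights = Raft_Large_Weights.DEFAULT
flow_net = raft_large(weights=weights).eval()
preprocess = weights.transforms()  # converts to float and normalizes image pairs


@torch.no_grad()
def motion_alignment_reward(src, edited, gt_edited):
    """Score how well the edit reproduces the ground-truth motion.

    src, edited, gt_edited: float tensors [N, 3, H, W] in [0, 1],
    with H and W divisible by 8 (a RAFT requirement).
    Returns a per-sample reward in [-1, 1]: the mean per-pixel cosine
    similarity between the predicted motion flow (src -> edited) and
    the ground-truth motion flow (src -> gt_edited).
    """
    src_p, edited_p = preprocess(src, edited)
    _, gt_p = preprocess(src, gt_edited)

    pred_flow = flow_net(src_p, edited_p)[-1]  # [N, 2, H, W], final refinement
    gt_flow = flow_net(src_p, gt_p)[-1]

    # Cosine similarity of flow vectors at each pixel, averaged over the image.
    sim = F.cosine_similarity(pred_flow, gt_flow, dim=1, eps=1e-6)  # [N, H, W]
    return sim.mean(dim=(1, 2))
```

A magnitude-sensitive alternative would be a negative endpoint error such as -(pred_flow - gt_flow).norm(dim=1).mean(); in a negative-aware post-training loop, edits with low alignment could then be treated as negative examples while high-alignment edits are reinforced.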