Watch and Learn: Learning to Use Computers from Online Videos
October 6, 2025
Authors: Chan Hee Song, Yiwen Song, Palash Goyal, Yu Su, Oriana Riva, Hamid Palangi, Tomas Pfister
cs.AI
Abstract
Computer use agents (CUAs) need to plan task workflows grounded in diverse,
ever-changing applications and environments, but learning is hindered by the
scarcity of large-scale, high-quality training data in the target application.
Existing datasets are domain-specific, static, and costly to annotate, while
current synthetic data generation methods often yield simplistic or misaligned
task demonstrations. To address these limitations, we introduce Watch & Learn
(W&L), a framework that converts human demonstration videos readily available
on the Internet into executable UI trajectories at scale. Instead of directly
generating trajectories or relying on ad hoc reasoning heuristics, we cast the
problem as an inverse dynamics objective: predicting the user's action from
consecutive screen states. This formulation reduces manual engineering, is
easier to learn, and generalizes more robustly across applications. Concretely,
we develop an inverse dynamics labeling pipeline with task-aware video
retrieval, generate over 53k high-quality trajectories from raw web videos, and
demonstrate that these trajectories improve CUAs both as in-context
demonstrations and as supervised training data. On the challenging OSWorld
benchmark, UI trajectories extracted with W&L consistently enhance both
general-purpose and state-of-the-art frameworks in-context, and deliver
stronger gains for open-source models under supervised training. These results
highlight web-scale human demonstration videos as a practical and scalable
foundation for advancing CUAs towards real-world deployment.
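The core of W&L is the inverse dynamics formulation described above: a model observes two consecutive screen states and predicts the user action that produced the transition, turning raw demonstration videos into labeled UI trajectories. The following is a minimal Python sketch of that idea under stated assumptions; the names (InverseDynamicsLabeler, predict_action, label_video) are illustrative and are not the paper's released code or API.

# Minimal sketch of inverse dynamics labeling: given consecutive screen
# states (frames), predict the UI action that caused the transition.
# All names here are illustrative assumptions, not the authors' code.
from dataclasses import dataclass
from typing import List

@dataclass
class UIAction:
    kind: str       # e.g. "click", "type", "scroll"
    argument: str   # e.g. target element description or typed text

class InverseDynamicsLabeler:
    """Hypothetical model wrapper mapping (state_t, state_t+1) -> action_t."""

    def predict_action(self, frame_t, frame_t1) -> UIAction:
        # In practice this would query a trained vision-language model on the
        # two screenshots; a constant placeholder keeps the sketch runnable.
        return UIAction(kind="click", argument="<predicted target element>")

def label_video(frames: List[object], labeler: InverseDynamicsLabeler) -> List[UIAction]:
    """Convert a demonstration video (frame sequence) into a UI trajectory."""
    return [
        labeler.predict_action(frames[i], frames[i + 1])
        for i in range(len(frames) - 1)
    ]

Applied over retrieved web videos, a labeler of this kind would yield action sequences that can then serve either as in-context demonstrations or as supervised training data for a computer use agent, which is the role the W&L trajectories play in the experiments above.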