Watch and Learn: Learning to Use Computers from Online Videos
October 6, 2025
Authors: Chan Hee Song, Yiwen Song, Palash Goyal, Yu Su, Oriana Riva, Hamid Palangi, Tomas Pfister
cs.AI
Abstract
Computer use agents (CUAs) need to plan task workflows grounded in diverse,
ever-changing applications and environments, but learning is hindered by the
scarcity of large-scale, high-quality training data in the target application.
Existing datasets are domain-specific, static, and costly to annotate, while
current synthetic data generation methods often yield simplistic or misaligned
task demonstrations. To address these limitations, we introduce Watch & Learn
(W&L), a framework that converts human demonstration videos readily available
on the Internet into executable UI trajectories at scale. Instead of directly
generating trajectories or relying on ad hoc reasoning heuristics, we cast the
problem as an inverse dynamics objective: predicting the user's action from
consecutive screen states. This formulation reduces manual engineering, is
easier to learn, and generalizes more robustly across applications. Concretely,
we develop an inverse dynamics labeling pipeline with task-aware video
retrieval, generate over 53k high-quality trajectories from raw web videos, and
demonstrate that these trajectories improve CUAs both as in-context
demonstrations and as supervised training data. On the challenging OSWorld
benchmark, UI trajectories extracted with W&L consistently enhance both
general-purpose and state-of-the-art frameworks in-context, and deliver
stronger gains for open-source models under supervised training. These results
highlight web-scale human demonstration videos as a practical and scalable
foundation for advancing CUAs towards real-world deployment.
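As a rough illustration of the inverse dynamics objective described in the abstract (predicting the user's action from consecutive screen states), the sketch below labels a pair of video frames with a UI action. The model architecture, action vocabulary, and all names are hypothetical stand-ins and are not taken from the paper or its pipeline.

```python
# Minimal sketch of an inverse-dynamics labeler for UI demonstration videos.
# Assumptions: PyTorch is available; the action vocabulary and the small
# convolutional encoder below are illustrative placeholders, not the
# components used in the W&L pipeline.

import torch
import torch.nn as nn

# Hypothetical action vocabulary for illustration only.
ACTIONS = ["click", "double_click", "type", "scroll", "drag", "hotkey"]


class InverseDynamicsLabeler(nn.Module):
    """Predict the action taken between two consecutive screenshots."""

    def __init__(self, embed_dim: int = 512):
        super().__init__()
        # Shared screen encoder (stand-in for any pretrained vision backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )
        # The action head conditions on the pair (state_t, state_{t+1}).
        self.action_head = nn.Linear(2 * embed_dim, len(ACTIONS))

    def forward(self, screen_t: torch.Tensor, screen_t1: torch.Tensor) -> torch.Tensor:
        z_t = self.encoder(screen_t)
        z_t1 = self.encoder(screen_t1)
        # Logits over the action vocabulary for the transition t -> t+1.
        return self.action_head(torch.cat([z_t, z_t1], dim=-1))


if __name__ == "__main__":
    model = InverseDynamicsLabeler()
    # Two consecutive 224x224 RGB frames from a demonstration video
    # (random tensors stand in for real screenshots here).
    s_t = torch.randn(1, 3, 224, 224)
    s_t1 = torch.randn(1, 3, 224, 224)
    logits = model(s_t, s_t1)
    print("predicted action:", ACTIONS[logits.argmax(dim=-1).item()])
```

In this framing, each predicted action together with its surrounding screen states forms one step of an executable UI trajectory, which is how raw web videos can be turned into in-context demonstrations or supervised training data at scale.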