VideoGUI: A Benchmark for GUI Automation from Instructional Videos
June 14, 2024
Authors: Kevin Qinghong Lin, Linjie Li, Difei Gao, Qinchen Wu, Mingyi Yan, Zhengyuan Yang, Lijuan Wang, Mike Zheng Shou
cs.AI
Abstract
Graphical User Interface (GUI) automation holds significant promise for
enhancing human productivity by assisting with computer tasks. Existing task
formulations primarily focus on simple tasks that can be specified by a single,
language-only instruction, such as "Insert a new slide." In this work, we
introduce VideoGUI, a novel multi-modal benchmark designed to evaluate GUI
assistants on visual-centric GUI tasks. Sourced from high-quality web
instructional videos, our benchmark focuses on tasks involving professional and
novel software (e.g., Adobe Photoshop or Stable Diffusion WebUI) and complex
activities (e.g., video editing). VideoGUI evaluates GUI assistants through a
hierarchical process, allowing for identification of the specific levels at
which they may fail: (i) high-level planning: reconstruct procedural subtasks
from visual conditions without language descriptions; (ii) middle-level
planning: generate sequences of precise action narrations based on visual state
(i.e., screenshot) and goals; (iii) atomic action execution: perform specific
actions such as accurately clicking designated elements. For each level, we
design evaluation metrics across individual dimensions to provide clear
signals, such as individual performance in clicking, dragging, typing, and
scrolling for atomic action execution. Our evaluation on VideoGUI reveals that
even the state-of-the-art large multimodal model GPT-4o performs poorly on
visual-centric GUI tasks, especially for high-level planning.
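To make the per-dimension scoring of atomic actions concrete, here is a minimal Python sketch. The abstract does not specify the exact metrics, so the bounding-box hit test for clicks and drags, and all names here (BBox, score_click, score_drag, per_action_accuracy), are illustrative assumptions rather than the benchmark's actual implementation.

```python
# Illustrative sketch only: VideoGUI reports separate signals for clicking,
# dragging, typing, and scrolling, but the abstract gives no formulas.
# This hypothetical scorer assumes a click counts as correct when the
# predicted point lands inside the target element's bounding box, a common
# convention in GUI-grounding benchmarks.
from dataclasses import dataclass


@dataclass
class BBox:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom


def score_click(pred_xy: tuple[float, float], target: BBox) -> float:
    """1.0 if the predicted click hits the target element, else 0.0."""
    x, y = pred_xy
    return 1.0 if target.contains(x, y) else 0.0


def score_drag(pred_start: tuple[float, float],
               pred_end: tuple[float, float],
               start_box: BBox, end_box: BBox) -> float:
    """A drag is scored as correct only if both endpoints land in their boxes."""
    hit_start = start_box.contains(*pred_start)
    hit_end = end_box.contains(*pred_end)
    return 1.0 if (hit_start and hit_end) else 0.0


def per_action_accuracy(results: dict[str, list[float]]) -> dict[str, float]:
    """Aggregate scores per action type, so a failure mode in, say, dragging
    is visible separately from clicking -- the 'clear signals' the abstract
    describes."""
    return {action: sum(scores) / len(scores)
            for action, scores in results.items() if scores}
```

A usage note: with this decomposition, a model's report might read {"click": 0.82, "drag": 0.35, ...}, immediately localizing the weakness to one action type instead of a single aggregate task-success number.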