

ShowUI-π: Flow-based Generative Models as GUI Dexterous Hands

December 31, 2025
作者: Siyuan Hu, Kevin Qinghong Lin, Mike Zheng Shou
cs.AI

Abstract

Building intelligent agents capable of dexterous manipulation is essential for achieving human-like automation in both robotics and digital environments. However, existing GUI agents rely on discrete click predictions (x, y), which precludes the free-form, closed-loop trajectories (e.g., dragging a progress bar) that require continuous, on-the-fly perception and adjustment. In this work, we develop ShowUI-π, the first flow-based generative model to serve as a GUI dexterous hand, featuring the following designs: (i) Unified Discrete-Continuous Actions, integrating discrete clicks and continuous drags within a shared model, enabling flexible adaptation across diverse interaction modes; (ii) Flow-based Action Generation for drag modeling, which predicts incremental cursor adjustments from continuous visual observations via a lightweight action expert, ensuring smooth and stable trajectories; (iii) Drag Training Data and Benchmark: we manually collect and synthesize 20K drag trajectories across five domains (e.g., PowerPoint, Adobe Premiere Pro) and introduce ScreenDrag, a benchmark with comprehensive online and offline evaluation protocols for assessing GUI agents' drag capabilities. Our experiments show that proprietary GUI agents still struggle on ScreenDrag (e.g., Operator scores 13.27, and the best performer, Gemini-2.5-CUA, reaches 22.18). In contrast, ShowUI-π achieves 26.98 with only 450M parameters, underscoring both the difficulty of the task and the effectiveness of our approach. We hope this work advances GUI agents toward human-like dexterous control in the digital world. The code is available at https://github.com/showlab/showui-pi.
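The abstract gives no implementation details, so the following is only a minimal sketch of the general flow-matching recipe it names: a velocity field is integrated from Gaussian noise at t = 0 to a cursor delta (dx, dy) at t = 1. All names here are hypothetical, and a toy closed-form conditional velocity stands in for the learned "action expert," which in the real model would predict the velocity from visual observations rather than from a known target.

```python
import numpy as np

def toy_velocity_field(action, t, target):
    # Stand-in for the learned action expert. Under a linear
    # flow-matching path a_t = (1 - t) * a0 + t * a1, the conditional
    # velocity toward a known target a1 is (a1 - a_t) / (1 - t).
    # A trained network would instead estimate this from screen
    # observations, with no access to the target.
    return (target - action) / (1.0 - t)

def sample_cursor_delta(target, num_steps=8, seed=0):
    # Euler-integrate the velocity field from Gaussian noise (t = 0)
    # to a 2-D cursor increment (t = 1), mimicking how a flow-based
    # policy would generate one step of a drag trajectory.
    rng = np.random.default_rng(seed)
    a = rng.standard_normal(2)          # noisy initial action
    dt = 1.0 / num_steps
    for k in range(num_steps):
        t = k * dt
        a = a + dt * toy_velocity_field(a, t, target)
    return a

delta = sample_cursor_delta(np.array([5.0, -3.0]))
```

With this closed-form field, Euler integration lands exactly on the target at t = 1; a learned field would only approximate it, and the policy would re-run this sampling at every control step so the drag can be corrected closed-loop as the screen changes.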