

Phi-Ground Tech Report: Advancing Perception in GUI Grounding

July 31, 2025
作者: Miaosen Zhang, Ziqiang Xu, Jialiang Zhu, Qi Dai, Kai Qiu, Yifan Yang, Chong Luo, Tianyi Chen, Justin Wagle, Tim Franklin, Baining Guo
cs.AI

Abstract

With the development of multimodal reasoning models, Computer Use Agents (CUAs), akin to Jarvis from "Iron Man", are becoming a reality. GUI grounding is a core component that lets CUAs execute actual actions, analogous to mechanical control in robotics, and it directly determines the success or failure of the system. It selects actions such as clicking and typing, along with their parameters, such as the coordinates of a click. Current end-to-end grounding models still achieve less than 65% accuracy on challenging benchmarks like ScreenSpot-pro and UI-Vision, indicating they are far from ready for deployment, as a single misclick can have unacceptable consequences. In this work, we conduct an empirical study on the training of grounding models, examining details from data collection to model training. Ultimately, we developed the Phi-Ground model family, which achieves state-of-the-art performance across all five grounding benchmarks for models under 10B parameters in agent settings. In the end-to-end model setting, our model still achieves SOTA results, with scores of 43.2 on ScreenSpot-pro and 27.2 on UI-Vision. We believe that the various details discussed in this paper, along with our successes and failures, not only clarify the construction of grounding models but also benefit other perception tasks. Project homepage: https://zhangmiaosen2000.github.io/Phi-Ground/
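The grounding task described above — predicting an action type plus its parameters, scored against a target UI element — can be sketched minimally as follows. The class, field names, and the bounding-box hit criterion are illustrative assumptions, not the paper's actual code or the benchmarks' exact scoring scripts:

```python
from dataclasses import dataclass

@dataclass
class GroundingAction:
    """One predicted GUI action: an action type plus its parameters."""
    kind: str          # e.g. "click" or "type" (illustrative action names)
    x: float = 0.0     # click coordinates, normalized to [0, 1]
    y: float = 0.0
    text: str = ""     # payload for "type" actions

def is_hit(action: GroundingAction, bbox: tuple) -> bool:
    """Score a click as correct when it lands inside the target element's
    bounding box (x0, y0, x1, y1) — a common grounding-benchmark criterion."""
    x0, y0, x1, y1 = bbox
    return action.kind == "click" and x0 <= action.x <= x1 and y0 <= action.y <= y1

# A prediction for "click the Save button", whose (hypothetical) box
# spans (0.70, 0.05) to (0.80, 0.10) in normalized screen coordinates:
pred = GroundingAction(kind="click", x=0.74, y=0.08)
print(is_hit(pred, (0.70, 0.05, 0.80, 0.10)))  # True
```

Under this kind of metric, a model's benchmark accuracy is simply the fraction of instructions whose predicted click falls inside the annotated target box, which is why a single misplaced coordinate counts as a full failure.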