
When Actions Go Off-Task: Detecting and Correcting Misaligned Actions in Computer-Use Agents

February 9, 2026
Authors: Yuting Ning, Jaylen Jones, Zhehao Zhang, Chentao Ye, Weitong Ruan, Junyi Li, Rahul Gupta, Huan Sun
cs.AI

Abstract

Computer-use agents (CUAs) have made tremendous progress in the past year, yet they still frequently produce misaligned actions that deviate from the user's original intent. Such misaligned actions may arise from external attacks (e.g., indirect prompt injection) or from internal limitations (e.g., erroneous reasoning). They not only expose CUAs to safety risks, but also degrade task efficiency and reliability. This work makes the first effort to define and study misaligned action detection in CUAs, with comprehensive coverage of both externally induced and internally arising misaligned actions. We further identify three common categories in real-world CUA deployment and construct MisActBench, a benchmark of realistic trajectories with human-annotated, action-level alignment labels. Moreover, we propose DeAction, a practical and universal guardrail that detects misaligned actions before execution and iteratively corrects them through structured feedback. DeAction outperforms all existing baselines across offline and online evaluations with moderate latency overhead: (1) On MisActBench, it outperforms baselines by over 15% absolute in F1 score; (2) In online evaluation, it reduces attack success rate by over 90% under adversarial settings while preserving or even improving task success rate in benign environments.
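
To make the detect-then-correct pattern described above concrete, the following Python sketch shows one possible shape of a guardrail that screens each proposed action before execution and, when an action is flagged, returns structured feedback to the agent for another attempt. This is an illustrative sketch only, not the paper's implementation: the names `Feedback`, `detect_misalignment`, `propose_action`, and `guarded_step` are hypothetical placeholders.

```python
# Illustrative sketch of a DeAction-style guardrail loop (hypothetical API,
# not the paper's actual code). The detector and policy are left as stubs
# standing in for model calls.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Feedback:
    """Structured verdict for a single proposed action."""
    misaligned: bool
    reason: str = ""


def detect_misalignment(user_goal: str, action: str, history: list[str]) -> Feedback:
    # Placeholder detector: judges whether `action` serves `user_goal`
    # given the trajectory so far. A real system would query a guardrail model.
    raise NotImplementedError


def propose_action(user_goal: str, history: list[str],
                   feedback: Optional[Feedback]) -> str:
    # Placeholder CUA policy: proposes the next action, optionally conditioned
    # on structured feedback about a previously rejected attempt.
    raise NotImplementedError


def guarded_step(user_goal: str, history: list[str],
                 max_retries: int = 3) -> Optional[str]:
    """Screen each proposed action before execution; retry with feedback if flagged."""
    feedback: Optional[Feedback] = None
    for _ in range(max_retries):
        action = propose_action(user_goal, history, feedback)
        feedback = detect_misalignment(user_goal, action, history)
        if not feedback.misaligned:
            return action  # safe to execute
    return None  # abstain after repeated misaligned proposals
```

The key design point the sketch illustrates is that detection happens before execution and that rejection produces structured feedback (a verdict plus a reason) rather than a bare refusal, so the agent can revise its action instead of simply halting.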