

Terminal Wrench: A Dataset of 331 Reward-Hackable Environments and 3,632 Exploit Trajectories

April 19, 2026
Authors: Ivan Bercovich, Ivgeni Segal, Kexun Zhang, Shashwat Saxena, Aditi Raghunathan, Ziqian Zhong
cs.AI

Abstract
We release Terminal Wrench, a dataset of 331 reward-hackable terminal-agent benchmark environments, copied from popular open benchmarks in which reward hacking has been demonstrated. The dataset includes 3,632 hack trajectories and 2,352 legitimate baseline trajectories across three frontier models (Claude Opus 4.6, Gemini 3.1 Pro, GPT-5.4). Each entry preserves the original task definition alongside a full attack trajectory showing how the verifier was bypassed, including cases where the task was not solved as intended. The tasks span system administration, machine learning, software engineering, and security challenges; the exploits range from simple output spoofing to stack-frame introspection, standard-library patching, and rootkit-style binary hijacking. Crucially, these exploits target the individual tasks rather than the evaluation harness, making them harder to patch. We also present a monitorability study in which hack trajectories are sanitized or stripped of reasoning traces and then scored by an LLM judge; detection degrades meaningfully when chain-of-thought is removed (AUC drops from 0.97 to 0.92). The dataset is publicly available at https://github.com/few-sh/terminal-wrench.
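The AUC comparison in the monitorability study can be illustrated with a minimal sketch. This is not code from the paper: the judge scores and labels below are made up for illustration, and AUC is computed with the rank-sum (Mann-Whitney U) formulation rather than any particular library.

```python
def roc_auc(labels, scores):
    """ROC AUC via the rank-sum (Mann-Whitney U) formulation.

    labels: 1 = hack trajectory, 0 = legitimate baseline trajectory.
    scores: the judge's suspicion score for each trajectory.
    Ties between a positive and a negative score count as half a win.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative scores only: with chain-of-thought visible, hack
# trajectories tend to receive clearly higher suspicion scores;
# without it, the two classes overlap more and AUC drops.
labels          = [1, 1, 1, 1, 0, 0, 0, 0]
scores_with_cot = [0.95, 0.90, 0.85, 0.25, 0.30, 0.20, 0.10, 0.05]
scores_no_cot   = [0.80, 0.60, 0.35, 0.30, 0.55, 0.25, 0.20, 0.10]

print(roc_auc(labels, scores_with_cot))  # higher separation with CoT
print(roc_auc(labels, scores_no_cot))    # degraded without CoT
```

The paper's reported 0.97 → 0.92 drop is this same kind of comparison, computed over the judge's scores on the real trajectory set.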
April 22, 2026