终端扳手:一个包含331个可奖励破解环境与3,632条攻击轨迹的数据集
Terminal Wrench: A Dataset of 331 Reward-Hackable Environments and 3,632 Exploit Trajectories
April 19, 2026
作者: Ivan Bercovich, Ivgeni Segal, Kexun Zhang, Shashwat Saxena, Aditi Raghunathan, Ziqian Zhong
cs.AI
摘要
我们正式发布Terminal Wrench数据集——包含331个终端智能体基准环境子集,这些环境复制自主流开放基准测试平台,且被证实存在奖励机制漏洞。该数据集涵盖三大前沿模型(Claude Opus 4.6、Gemini 3.1 Pro、GPT-5.4)的3,632条攻击轨迹与2,352条合法基线轨迹。每个条目均保留原始任务定义及完整的攻击轨迹,清晰展示验证器被绕过的过程,同时包含未按预期完成的任务案例。任务范围涉及系统管理、机器学习、软件工程和安全挑战;攻击手段从简单的输出欺骗到堆栈帧内省、标准库修补乃至根套件式二进制劫持。关键之处在于,这些攻击手段均针对具体任务而非评估框架本身,使得修补难度显著增加。我们还开展了一项可监测性研究:通过清理攻击轨迹或移除推理痕迹后交由LLM评判器打分,结果显示当移除思维链时检测效能明显下降(AUC从0.97降至0.92)。数据集已公开发布于https://github.com/few-sh/terminal-wrench。
English
We release Terminal Wrench, a subset of 331 terminal-agent benchmark environments, copied from the popular open benchmarks that are demonstrably reward-hackable. The data set includes 3,632 hack trajectories and 2,352 legitimate baseline trajectories across three frontier models (Claude Opus 4.6, Gemini 3.1 Pro, GPT-5.4). Each entry preserves the original task definition alongside full attack trajectories that show how the verifier was bypassed. It also includes cases where the task was not solved as intended. The tasks span system administration, machine learning, software engineering, and security challenges; the exploits range from simple output spoofing to stack-frame introspection, standard-library patching, and rootkit-style binary hijacking. Crucially, these exploits are specific to each task, rather than the evaluation harness, making them harder to patch. We also present a monitorability study in which hack trajectories are sanitized or stripped of reasoning traces and then scored by an LLM judge, showing that detection degrades meaningfully when chain-of-thought is removed (AUC drops from 0.97 to 0.92). The data set is publicly available at https://github.com/few-sh/terminal-wrench.