ChatInject:滥用聊天模板进行大语言模型代理中的提示注入攻击
ChatInject: Abusing Chat Templates for Prompt Injection in LLM Agents
September 26, 2025
作者: Hwan Chang, Yonghyun Jun, Hwanhee Lee
cs.AI
摘要
随着基于大型语言模型(LLM)的代理在外部环境中的广泛应用,新的攻击面也随之产生,为恶意操控提供了可乘之机。其中一大威胁是间接提示注入攻击,即攻击者将恶意指令嵌入外部环境输出中,诱使代理将其解读并执行,仿佛这些指令是合法提示。尽管以往研究主要集中于纯文本注入攻击,我们发现了一个重要但尚未充分探索的漏洞:LLM对结构化聊天模板的依赖及其在具有说服力的多轮对话中易受上下文操控的特性。为此,我们提出了ChatInject攻击,该攻击通过模仿原生聊天模板的格式来嵌入恶意载荷,从而利用模型固有的指令遵循倾向。在此基础上,我们开发了一种基于说服策略的多轮对话变体,通过多轮对话引导代理接受并执行原本可疑的操作。通过对前沿LLM的全面实验,我们得出了三个关键发现:(1) ChatInject的平均攻击成功率显著高于传统提示注入方法,在AgentDojo上从5.18%提升至32.05%,在InjecAgent上从15.13%提升至45.90%,其中多轮对话在InjecAgent上表现尤为突出,平均成功率高达52.33%;(2) 基于聊天模板的载荷在模型间展现出强大的可迁移性,即便面对模板结构未知的闭源LLM,仍能保持有效;(3) 现有的基于提示的防御措施对此类攻击,尤其是多轮对话变体,基本无效。这些发现揭示了当前代理系统中的脆弱性。
English
The growing deployment of large language model (LLM) based agents that
interact with external environments has created new attack surfaces for
adversarial manipulation. One major threat is indirect prompt injection, where
attackers embed malicious instructions in external environment output, causing
agents to interpret and execute them as if they were legitimate prompts. While
previous research has focused primarily on plain-text injection attacks, we
find a significant yet underexplored vulnerability: LLMs' dependence on
structured chat templates and their susceptibility to contextual manipulation
through persuasive multi-turn dialogues. To this end, we introduce ChatInject,
an attack that formats malicious payloads to mimic native chat templates,
thereby exploiting the model's inherent instruction-following tendencies.
Building on this foundation, we develop a persuasion-driven Multi-turn variant
that primes the agent across conversational turns to accept and execute
otherwise suspicious actions. Through comprehensive experiments across frontier
LLMs, we demonstrate three critical findings: (1) ChatInject achieves
significantly higher average attack success rates than traditional prompt
injection methods, improving from 5.18% to 32.05% on AgentDojo and from 15.13%
to 45.90% on InjecAgent, with multi-turn dialogues showing particularly strong
performance at average 52.33% success rate on InjecAgent, (2)
chat-template-based payloads demonstrate strong transferability across models
and remain effective even against closed-source LLMs, despite their unknown
template structures, and (3) existing prompt-based defenses are largely
ineffective against this attack approach, especially against Multi-turn
variants. These findings highlight vulnerabilities in current agent systems.