ChatPaper.aiChatPaper

利用新型 GPT-4 API

Exploiting Novel GPT-4 APIs

December 21, 2023
作者: Kellin Pelrine, Mohammad Taufeeque, Michał Zając, Euan McLean, Adam Gleave
cs.AI

摘要

語言模型攻擊通常假設兩種極端的威脅模型之一:完全白盒訪問模型權重,或者僅限於文本生成 API 的黑盒訪問。然而,現實世界中的 API 往往比僅限於文本生成更具靈活性:這些 API 提供“灰盒”訪問,導致新的威脅向量。為了探索這一點,我們對 GPT-4 API 中公開的三個新功能進行了紅隊測試:微調、函數調用和知識檢索。我們發現,對模型進行微調,即使是在 15 個有害示例或 100 個良性示例的情況下,都可以從 GPT-4 中去除核心保護措施,從而實現一系列有害輸出。此外,我們發現 GPT-4 助理很容易洩露函數調用架構,並可以執行任意函數調用。最後,我們發現知識檢索可以被劫持,通過向檢索文檔中注入指令。這些漏洞凸顯了 API 提供的功能擴展可能會產生新的漏洞。
English
Language model attacks typically assume one of two extreme threat models: full white-box access to model weights, or black-box access limited to a text generation API. However, real-world APIs are often more flexible than just text generation: these APIs expose ``gray-box'' access leading to new threat vectors. To explore this, we red-team three new functionalities exposed in the GPT-4 APIs: fine-tuning, function calling and knowledge retrieval. We find that fine-tuning a model on as few as 15 harmful examples or 100 benign examples can remove core safeguards from GPT-4, enabling a range of harmful outputs. Furthermore, we find that GPT-4 Assistants readily divulge the function call schema and can be made to execute arbitrary function calls. Finally, we find that knowledge retrieval can be hijacked by injecting instructions into retrieval documents. These vulnerabilities highlight that any additions to the functionality exposed by an API can create new vulnerabilities.
PDF147December 15, 2024