Stealing Part of a Production Language Model

March 11, 2024
Authors: Nicholas Carlini, Daniel Paleka, Krishnamurthy Dj Dvijotham, Thomas Steinke, Jonathan Hayase, A. Feder Cooper, Katherine Lee, Matthew Jagielski, Milad Nasr, Arthur Conmy, Eric Wallace, David Rolnick, Florian Tramèr
cs.AI

Abstract

We introduce the first model-stealing attack that extracts precise, nontrivial information from black-box production language models like OpenAI's ChatGPT or Google's PaLM-2. Specifically, our attack recovers the embedding projection layer (up to symmetries) of a transformer model, given typical API access. For under $20 USD, our attack extracts the entire projection matrix of OpenAI's Ada and Babbage language models. We thereby confirm, for the first time, that these black-box models have a hidden dimension of 1024 and 2048, respectively. We also recover the exact hidden dimension size of the gpt-3.5-turbo model, and estimate it would cost under $2,000 in queries to recover the entire projection matrix. We conclude with potential defenses and mitigations, and discuss the implications of possible future work that could extend our attack.
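The dimension-recovery step behind these numbers rests on a simple linear-algebra observation: each full logit vector the API returns is a linear image, through the l × h embedding projection matrix, of an h-dimensional final hidden state, so a matrix of stacked logit vectors has rank at most h. The sketch below demonstrates this against a simulated black box; `query_logits`, the random weights, and the sizes chosen are illustrative stand-ins, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
h, l, n = 256, 4096, 512  # hypothetical hidden dim, vocab size, query count

# Simulated black box: logits = W @ hidden_state, where W (l x h) plays the
# role of the unknown embedding projection layer the attack targets.
W = rng.normal(size=(l, h))

def query_logits(prompt_id: int) -> np.ndarray:
    """Stand-in for one API call returning the full logit vector."""
    hidden = rng.normal(size=h)  # final hidden state for this "prompt"
    return W @ hidden

# Stack logit vectors from many prompts. Every row lies in the h-dimensional
# column space of W, so rank(Q) <= h regardless of how many prompts we use.
Q = np.stack([query_logits(i) for i in range(n)])
s = np.linalg.svd(Q, compute_uv=False)

# The singular-value spectrum collapses after index h; the largest log-gap
# locates that collapse and hence reveals the hidden dimension.
gaps = np.log(s[:-1] + 1e-12) - np.log(s[1:] + 1e-12)
print("estimated hidden dimension:", np.argmax(gaps) + 1)
```

In the full attack, the same stacked logit matrix yields more than a rank count: its top-h singular vectors span the column space of the projection matrix, recovering it up to an unknown h × h linear transformation, which is the "up to symmetries" caveat in the abstract.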
