Hackphyr: A Local Fine-Tuned LLM Agent for Network Security Environments

September 17, 2024
Authors: Maria Rigaki, Carlos Catania, Sebastian Garcia
cs.AI

Abstract

Large Language Models (LLMs) have shown remarkable potential across various domains, including cybersecurity. Using commercial cloud-based LLMs may be undesirable due to privacy concerns, costs, and network connectivity constraints. In this paper, we present Hackphyr, a locally fine-tuned LLM to be used as a red-team agent within network security environments. Our fine-tuned 7 billion parameter model can run on a single GPU card and achieves performance comparable with much larger and more powerful commercial models such as GPT-4. Hackphyr clearly outperforms other models, including GPT-3.5-turbo, and baselines such as Q-learning agents, in complex, previously unseen scenarios. To achieve this performance, we generated a new task-specific cybersecurity dataset to enhance the base model's capabilities. Finally, we conducted a comprehensive analysis of the agents' behaviors that provides insights into the planning abilities and potential shortcomings of such agents, contributing to the broader understanding of LLM-based agents in cybersecurity contexts.
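
The central engineering claim is that a 7-billion-parameter model, fine-tuned on a task-specific cybersecurity dataset, can be trained and run on a single GPU card. The abstract does not include code, so the sketch below is only a minimal, hypothetical illustration of how such single-GPU fine-tuning is commonly done with 4-bit quantization and LoRA adapters (Hugging Face transformers, peft, datasets); the base model name, dataset file, field names, and hyperparameters are illustrative assumptions, not the authors' actual configuration.

```python
# Hypothetical sketch: QLoRA-style fine-tuning of a 7B base model on a
# task-specific cybersecurity dataset so that training fits on one GPU.
# Model name, dataset path, and hyperparameters are assumptions.
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

base_model = "mistralai/Mistral-7B-v0.1"  # assumed 7B base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

# Load the base model in 4-bit so its weights fit in single-GPU memory.
bnb = BitsAndBytesConfig(load_in_4bit=True,
                         bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(base_model,
                                             quantization_config=bnb,
                                             device_map="auto")
model = prepare_model_for_kbit_training(model)

# Train small LoRA adapters instead of updating all 7B parameters.
lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# Task-specific cybersecurity dataset (placeholder path, assumed "text" field).
data = load_dataset("json", data_files="cybersec_dataset.jsonl")["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     max_length=1024),
                remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="hackphyr-7b-lora",
                           per_device_train_batch_size=2,
                           gradient_accumulation_steps=8,
                           num_train_epochs=3,
                           learning_rate=2e-4,
                           bf16=True,
                           logging_steps=10),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Under these assumptions, only the LoRA adapter weights are updated, which is what keeps memory use within a single consumer-grade GPU; the resulting adapters would be merged with or loaded alongside the quantized base model at inference time.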
