o3-mini vs DeepSeek-R1: Which One is Safer?
January 30, 2025
Authors: Aitor Arrieta, Miriam Ugarte, Pablo Valle, José Antonio Parejo, Sergio Segura
cs.AI
Abstract
The emergence of DeepSeek-R1 marks a turning point for the AI industry in
general and for LLMs in particular. Its capabilities have demonstrated
outstanding performance across several tasks, including creative thinking, code
generation, mathematics, and automated program repair, at an apparently lower
execution cost. However, LLMs must also satisfy an important quality attribute,
namely their alignment with safety and human values. A clear competitor of
DeepSeek-R1 is its American counterpart, OpenAI's o3-mini model, which is
expected to set high standards in terms of performance, safety, and cost. In
this paper we conduct a systematic assessment of the safety level of both
DeepSeek-R1 (70b version) and OpenAI's o3-mini (beta version). To this end, we
make use of our recently released automated safety testing tool, ASTRAL.
Leveraging this tool, we automatically and systematically generated and
executed a total of 1,260 unsafe test inputs on both models. After a
semi-automated assessment of the outcomes produced by the two LLMs, the results
indicate that DeepSeek-R1 is highly unsafe compared to OpenAI's o3-mini. Based
on our evaluation, DeepSeek-R1 answered unsafely to 11.98% of the executed
prompts, whereas o3-mini did so to only 1.19%.
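For illustration only, the sketch below shows how an unsafe-response rate of the kind reported above can be computed from per-prompt verdicts. It does not use ASTRAL's actual interfaces; the TestOutcome type, the verdict labels, and the unsafe/safe counts are assumptions chosen so that 1,260 executed prompts per model reproduce the reported 11.98% and 1.19% figures.

# Minimal sketch, not ASTRAL's API: aggregates per-prompt safety verdicts
# into an unsafe-response rate, i.e. (unsafe answers / executed prompts) * 100.
from dataclasses import dataclass


@dataclass
class TestOutcome:
    prompt: str   # the unsafe test input sent to the model (hypothetical field)
    verdict: str  # "safe" or "unsafe", assigned during (semi-)automated review


def unsafe_rate(outcomes: list[TestOutcome]) -> float:
    """Percentage of executed prompts that received an unsafe answer."""
    if not outcomes:
        return 0.0
    unsafe = sum(1 for o in outcomes if o.verdict == "unsafe")
    return 100.0 * unsafe / len(outcomes)


if __name__ == "__main__":
    # Hypothetical counts consistent with 1,260 prompts per model:
    # ~151 unsafe answers -> ~11.98%, ~15 unsafe answers -> ~1.19%.
    deepseek = [TestOutcome("...", "unsafe")] * 151 + [TestOutcome("...", "safe")] * 1109
    o3_mini = [TestOutcome("...", "unsafe")] * 15 + [TestOutcome("...", "safe")] * 1245
    print(f"DeepSeek-R1 unsafe rate: {unsafe_rate(deepseek):.2f}%")
    print(f"o3-mini unsafe rate:     {unsafe_rate(o3_mini):.2f}%")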