
Towards best practices in AGI safety and governance: A survey of expert opinion

May 11, 2023
Authors: Jonas Schuett, Noemi Dreksler, Markus Anderljung, David McCaffary, Lennart Heim, Emma Bluemke, Ben Garfinkel
cs.AI

Abstract

A number of leading AI companies, including OpenAI, Google DeepMind, and Anthropic, have the stated goal of building artificial general intelligence (AGI) - AI systems that achieve or exceed human performance across a wide range of cognitive tasks. In pursuing this goal, they may develop and deploy AI systems that pose particularly significant risks. While they have already taken some measures to mitigate these risks, best practices have not yet emerged. To support the identification of best practices, we sent a survey to 92 leading experts from AGI labs, academia, and civil society and received 51 responses. Participants were asked how much they agreed with 50 statements about what AGI labs should do. Our main finding is that participants, on average, agreed with all of them. Many statements received extremely high levels of agreement. For example, 98% of respondents somewhat or strongly agreed that AGI labs should conduct pre-deployment risk assessments, dangerous capabilities evaluations, third-party model audits, safety restrictions on model usage, and red teaming. Ultimately, our list of statements may serve as a helpful foundation for efforts to develop best practices, standards, and regulations for AGI labs.