International Institutions for Advanced AI
July 10, 2023
Authors: Lewis Ho, Joslyn Barnhart, Robert Trager, Yoshua Bengio, Miles Brundage, Allison Carnegie, Rumman Chowdhury, Allan Dafoe, Gillian Hadfield, Margaret Levi, Duncan Snidal
cs.AI
Abstract
International institutions may have an important role to play in ensuring
advanced AI systems benefit humanity. International collaborations can unlock
AI's ability to further sustainable development, and coordination of regulatory
efforts can reduce obstacles to innovation and the spread of benefits.
Conversely, the potential dangerous capabilities of powerful and
general-purpose AI systems create global externalities in their development and
deployment, and international efforts to further responsible AI practices could
help manage the risks they pose. This paper identifies a set of governance
functions that could be performed at an international level to address these
challenges, ranging from supporting access to frontier AI systems to setting
international safety standards. It groups these functions into four
institutional models that exhibit internal synergies and have precedents in
existing organizations: 1) a Commission on Frontier AI that facilitates expert
consensus on opportunities and risks from advanced AI, 2) an Advanced AI
Governance Organization that sets international standards to manage global
threats from advanced models, supports their implementation, and possibly
monitors compliance with a future governance regime, 3) a Frontier AI
Collaborative that promotes access to cutting-edge AI, and 4) an AI Safety
Project that brings together leading researchers and engineers to further AI
safety research. We explore the utility of these models and identify open
questions about their viability.