International Institutions for Advanced AI
July 10, 2023
Authors: Lewis Ho, Joslyn Barnhart, Robert Trager, Yoshua Bengio, Miles Brundage, Allison Carnegie, Rumman Chowdhury, Allan Dafoe, Gillian Hadfield, Margaret Levi, Duncan Snidal
cs.AI
Abstract
International institutions may have an important role to play in ensuring
advanced AI systems benefit humanity. International collaborations can unlock
AI's ability to further sustainable development, and coordination of regulatory
efforts can reduce obstacles to innovation and the spread of benefits.
Conversely, the potentially dangerous capabilities of powerful and
general-purpose AI systems create global externalities in their development and
deployment, and international efforts to further responsible AI practices could
help manage the risks they pose. This paper identifies a set of governance
functions that could be performed at an international level to address these
challenges, ranging from supporting access to frontier AI systems to setting
international safety standards. It groups these functions into four
institutional models that exhibit internal synergies and have precedents in
existing organizations: 1) a Commission on Frontier AI that facilitates expert
consensus on opportunities and risks from advanced AI, 2) an Advanced AI
Governance Organization that sets international standards to manage global
threats from advanced models, supports their implementation, and possibly
monitors compliance with a future governance regime, 3) a Frontier AI
Collaborative that promotes access to cutting-edge AI, and 4) an AI Safety
Project that brings together leading researchers and engineers to further AI
safety research. We explore the utility of these models and identify open
questions about their viability.