Editor’s Note:
Since the initial publication of Global Governance: Goals and Lessons for AI in May 2024, we’ve hosted or joined more than a dozen conversations exploring its key takeaways. Today we’re releasing an updated version of the book that reflects the insights shared in those conversations.
Expert participants from around the world have urged a more explicit focus on ensuring that policy is informed by strong science and that existing gaps – where policymaking has got ahead of the science – are closed as quickly as possible. A number of participants have suggested that we consider lessons from the pharmaceutical and cybersecurity domains for the international governance of AI. And in many conversations, participants have noted the need to create a governance architecture that includes both old and new institutions and processes, including a network of AI Safety Institutes and partners, an AI summit series, and a state-of-the-science report.
Building a shared scientific understanding of AI risks has also emerged as a foundational need for AI governance. The UN offers an important foundation for furthering the progress we’ve seen over the past 12 months; sustaining that progress will require countries around the world to develop new scientific understanding and keep pace with technological progress. The UN is also well situated to advance consensus and shared reference points, helping to facilitate an inclusive, durable process for AI governance. Over the coming months, we will continue to host conversations about the future of AI governance and to bring together stakeholders from around the world to reflect on frameworks and progress, including through our Global Perspectives fellowship program.
Originally published May 17, 2024
As AI policy conversations expanded last year, they started to be punctuated by repeated references to unexpected abbreviations. Not the usual short names for new AI models or machine learning jargon, but acronyms for the different international institutions that today govern civil aviation, nuclear power, and global capital flows.
This piqued our curiosity. We wanted to go deeper and learn more about how approaches to governing civil aviation might apply to a set of technologies that would never be assembled in a hangar or guided by air traffic control officers. And we were eager to learn about nuclear commitments that emerged in an entirely different geopolitical era to regulate technology that showed promise as a tool but had only been used as a weapon.
Indeed, history has long taught us that the way in which technology transforms our world is in part a product of how effectively it is governed, and that international governance is vital for technologies that know no borders.
Today, we’re excited to share Global Governance: Goals and Lessons for AI, a collection of external perspectives on international institutions from different domains, brought together with our own thoughts on goals and frameworks for global AI governance. Through case studies and analysis, experts chart the history and evolution of institutions such as the International Civil Aviation Organization and the Financial Stability Board and share insights on their successes and challenges to inform the global governance of AI.
Drawing on this deep expert insight, we came away with three high-level takeaways for AI:
- As with civil aviation and global capital flows, AI governance involves three interrelated layers: industry standards, domestic regulation, and international governance.
- At the international governance layer, three outcomes are important for AI: globally significant risk governance, regulatory interoperability, and inclusive progress.
- Four international governance functions will enable those outcomes: monitoring for and managing global risks, setting standards, building scientific consensus, and strengthening appropriate access to resources.
Below, you can hear directly from our expert contributors, who share some of the insights that helped us land on these takeaways.
From Sir Chris Llewellyn Smith, former CERN Director General and an Emeritus Professor at the University of Oxford, we learned that enabling access to resources is core to the European Organization for Nuclear Research (CERN).
Building scientific consensus is a governance function epitomized by the Intergovernmental Panel on Climate Change (IPCC), about which we learned from Diana Liverman, an IPCC lead author, and Youba Sokona, an IPCC vice-chair and lead author. Reflecting on the IPCC’s link to the United Nations, they shared the benefits and drawbacks of working to infuse a political process with science-based decision-making.
As we learned from Dr. Julia Morse, an Assistant Professor at the University of California, Santa Barbara, many different international institutions have a standards-setting function, though how they perform it varies depending on the formality of their governance structures. Dr. Morse contributed a chapter on our “highly institutionalized world,” comparing international institutions that emerged in the immediate post-World War II era to those that have emerged more recently.
The International Civil Aviation Organization (ICAO) facilitates collaboration among government and industry experts to set standards that are primarily enforced at the domestic level through member state audits. Incentives to implement standards are strong – ranging from safety and security imperatives to economic drivers, as detailed by David Heffernan and Rachel Schwartz, aviation law experts.
As we learned from Christina Parajon Skinner, an assistant professor at the University of Pennsylvania, the Financial Action Task Force (FATF) and Financial Stability Board (FSB) also have a standards-setting role. However, the evolving nature of global financial institutions is emblematic of more recent and informal international governance structures, especially around the function of risk monitoring and management.
Despite its more formal treaty basis, the International Atomic Energy Agency (IAEA) has evolved since its establishment. Best known for its mandate to monitor for and manage risks of nuclear weapon development, it has also grown to develop safety and security standards and to provide technical assistance to member states, as Dr. Trevor Findlay, a Principal Fellow at the University of Melbourne and former appointee to a United Nations advisory board on disarmament matters, helped us understand. Dr. Findlay also pointed out that, until recently, the nuclear energy industry had limited involvement in the IAEA.
These expert insights articulate the layered, evolving, and interconnected nature of global governance, and help us chart an informed path forward for international AI governance. There is a growing need for effective governance at the global level to ensure that domestic efforts towards safe, secure, and trustworthy AI are interoperable; that AI’s benefits are shared widely; and that globally significant risks are managed effectively.
Today, many governments, international institutions, and members of the private and non-profit sectors are engaged in initiatives that ladder up to these goals. But we are still in the early days of AI governance. To achieve the outcomes we have proposed in the book, we need durable frameworks to guide an evolving global governance system and new approaches informed by the lessons of the past.
We hope that this book and the rich insights it shares are a useful contribution to that effort.
Global Governance: Goals and Lessons for AI is also available in print and e-reader versions.