With great power comes great responsibility. Artificial intelligence (AI) holds the potential to transform our world, but using it in practice means being mindful of the risks and anxieties around privacy, surveillance, and intentional misuse.
That awareness sits behind the European Union’s recent proposal for the first-ever legal framework for responsible AI, which would see applications from medicine to law enforcement subject to clearly demarcated boundaries, proof of safety and human oversight.
The EU proposal was welcomed by panelists in the “AI transatlantic cooperation – importance and opportunities” session during Microsoft’s Data Science & Law Forum 3.0, which examined how multi-stakeholder partnerships and coalitions can serve as a foundation for building a trustworthy governance framework for AI.
The panel was chaired by Joshua Meltzer, Senior Fellow in the Global Economy and Development program at the Brookings Institution. His fellow panel members were:
- Allison Schwier, Acting Science and Technology Adviser to the U.S. Secretary of State
- Wan Sie Lee, Director of Trusted AI and Data at the Infocomm Media Development Authority (IMDA) in Singapore
- Audrey Plonk, Head of Division, Digital Economy Policy at the Organization for Economic Co-operation and Development (OECD)
- Jayant Narayan, Artificial Intelligence and Machine Learning Lead, World Economic Forum
These are some of the key takeaways from their discussion.
Why does international cooperation on AI matter?
Joshua Meltzer said that multilateral cooperation not only makes the development of a cohesive governance framework for ethical AI more consistent and effective, but also creates significant economic gains through expanded opportunities. Cooperation is also essential, he noted, for driving AI innovation.
Allison Schwier explained the U.S. viewpoint on global cooperation. “The State Department focuses on AI because it could shape everything about our lives, from where we get energy to how we do our jobs to how wars are fought,” she said. “We fundamentally believe the promises of AI will surpass its challenges. We believe scientific and research innovations such as AI will uplift and empower people, provided they’re developed, disseminated and governed in a way that aligns with our shared democratic values and human rights. To achieve that alignment, we have to work together.”
The United States, she said, worked with the OECD to establish the first set of intergovernmental principles for the responsible development of trustworthy AI.
Schwier added that the U.S. also helped to set up the Global Partnership on AI to further practical cooperation on AI projects: “Both initiatives provide us opportunities to move from principle to practice, ensuring that AI is designed and deployed in line with our shared values,” she said.
How do you introduce regulation without stifling innovation?
Singapore has taken a different approach from other countries in monitoring and managing AI. “We believe that there’s a need to balance encouraging AI innovation [in] organizations, as well as building public trust in technology,” said Wan Sie Lee. “So, at this point, we are not pro-regulation just yet.”
Instead, Singapore encourages industry to adopt responsible AI practices voluntarily. Although Lee described the EU proposals as “well thought-through,” she noted there were some challenges in introducing regulation that Singapore was still thinking about.
For example, “When we talk about conformity assessments, how do you go about doing that? That’s something that we think is important to get sorted, and it’s something that I think we are looking into in Singapore, to see what we can do in this area, and then to the subsequent enforcement, because you need some teeth, or some bite in order to implement the regulation.”
Is global interoperability achievable?
Jayant Narayan said that, overall, the appetite for cooperation was strong.
“What we are noticing through our work at the [World Economic] Forum is that there is a lot more willingness to collaborate, and governments are actually actively asking for it,” he said.
“I think we do see a lot of overlap in how different countries are thinking about this – thinking about regulating AI,” added Audrey Plonk, pointing out that a number of nations had made policy changes mapped to the OECD’s AI Principles, which offer guidance on responsible AI at both the national and international level.
“If you think of regulation more as a spectrum… we do see some direct tracking to the AI principles, which I think is exactly what we would expect to see two years into the adoption of that instrument, where countries are really trying to take the principles and apply them in policy and regulatory practice.”