Just days after the European Commission unveiled its proposal for the first ever legal framework on AI, the Commission’s Executive Vice-President Margrethe Vestager shared her views with Microsoft’s Vice President Casper Klynge, as part of the company’s Data Science & Law Forum 3.0. The proposed AI Regulation is the first step in a long legislative process that will evolve over the coming months.
Policymakers around the world are grappling with how to set rules that shape robust and reliable artificial intelligence without curtailing innovation or stifling potential gains. While AI technologies offer a myriad of opportunities and benefits, they also raise ethical questions and risks to fundamental rights.
“The aim is quite simple,” Vestager said. “Let’s use it more. Let’s have artificial intelligence everywhere where it makes a difference.”
Five key guarantees
By embracing AI and crafting a breeding-ground for human-centric innovation, Europe will be able to fast-track beneficial uses, such as in the green economy and in farming, she said. At the same time, a strong framework for quantifying and mitigating risks will ensure that citizens and companies can have confidence that the technologies will primarily be used for good.
The proposed EU framework revolves around five key guarantees when it comes to high-risk AI systems: feeding in high-quality data to prevent bias, compiling documentation to ensure explainability and compliance, making sure users have enough information to give them confidence, ensuring human oversight in development and in implementation, and abiding by the highest standards of cyber security.
“Trust goes hand in hand with excellence,” she said. “By using artificial intelligence more, because we can trust it, we also aim at making Europe world class when it comes to developing secure, trustworthy, human-centered artificial intelligence in the future.”
A global endeavor
While the proposed legislation is for the European Union, Vestager sees benefits in all democracies aligning their approaches, harnessing AI in a way that is built on integrity and respects the dignity of each individual.
The debate has changed since Europe led the way on privacy legislation, she said, with the need to protect citizens and foster innovation being front of mind for politicians across the globe.
“Digital has come into geopolitics as a new and very important feature,” Vestager said. “The way we deal with technology also shows what we expect of our democracy and of our societies.”
Trust sits at the heart of the proposals, with the draft regulations seeking to underscore confidence in what AI has to offer. Solutions that are trustworthy, legally sound and ethical will allow European citizens to embrace the technologies’ most promising aspects, she said.
“Innovation and regulation here actually go hand in hand, they closely follow each other,” Vestager said. “The task here is to make sure that what we do is proportional, risk based, and more than anything creates legal certainty, both for those who develop and for those who use AI.”
Taking a proportional and risk-based approach means putting the risk to life and the risk to fundamental values at the core of each assessment. AI is put into four categories, from the least risky to the most, which Vestager encouraged the audience to visualize using a pyramid, where the base covers AI systems that raise no or minimal risk, while the very tip is reserved for unacceptable risks. “If you are to trust something, well, then you need to be able to mitigate the risks,” Vestager said. “The higher the risk, the stricter the rule must be.”
Applications that would face no restrictions include spam filters that keep unwanted emails out of your inbox and technologies that cut fabric in the most efficient way to minimize waste, she said.
Limited-risk cases include chatbots that help in customer service, for example with buying tickets, and these would face transparency obligations, so that consumers are made aware they are interacting with a machine.
High-risk uses are the main focus of the framework, as these are often the most complex and have the potential to be biased or discriminatory. Examples in this category include software that screens applicants for jobs, university places or financial products. Here strict rules governed by the five key guarantees would apply.
Lastly, there are uses that would be banned altogether: those that harness subliminal techniques to cause physical or psychological harm, for example voice assistants used to manipulate a child, or applications that rank people based on their behavior.
Far from holding the technology back with heavy-handed rules, making it as safe as possible and well regulated will open up a gigantic market, she said.
Being proportionate and enforceable in this way, the legal framework aims to guarantee safety and rights while strengthening AI uptake, investment and innovation.
“What we’re trying to do here is to build trust that if there is a risk, the risk can be mitigated so that we can make best use of the technology,” Vestager said. “And that of course is also why we want to give businesses the best possible access to build AI.”