Will the EU’s planned AI regulation safeguard fundamental rights and encourage innovation?

The European Commission does not want to impose unnecessary restrictions on the development of artificial intelligence (AI). But it does want to carefully manage the ways in which AI can be used. That was the message from Lucilla Sioli, Director for Artificial Intelligence and Digital Industry within DG CONNECT at the European Commission. Sioli was speaking during the “Making sense of Europe’s legislative framework” panel at this year’s AI Summit hosted by POLITICO.

Held virtually on 31 May, the panel explored some of the complexities facing policymakers, the technology industry, and civil liberties groups when attempting to build an appropriate regulatory framework around AI in Europe. Balancing the demands of those who want to push the boundaries of innovation against the fears of those who believe the technology could be misused is at the heart of the debate. The same consideration was also at the heart of the panel’s discussion, which featured:

● Lucilla Sioli, Director for Artificial Intelligence and Digital Industry, DG CONNECT, European Commission
● Daniel Leufer, EU AI policy lead, Access Now
● Dragoș Tudorache, MEP (Renew Europe, Romania) and Chair of the AIDA committee, European Parliament
● Cornelia Kutterer, Senior Director, European Government Affairs, Rule of Law and Responsible Tech, Microsoft
● Moderator Melissa Heikkilä, AI correspondent at POLITICO

What is being regulated?

The European Commission’s proposed Artificial Intelligence Act was announced in April 2021. As Lucilla Sioli stated early in the debate, the focus of the proposal is not the technology itself. Instead, the Commission wants to regulate some of the uses to which AI is put.

“We are regulating the use of AI systems,” Sioli explained to the audience and fellow panelists, saying that the Commission had compiled “a list of use cases that we would like to be checked before they are put on the European market.”

Cornelia Kutterer, Senior Director, European Government Affairs, Rule of Law and Responsible Tech at Microsoft, stressed the importance of clearly defining high-risk use-cases for AI and ensuring that emphasis is put on restricting such uses, not the development of the technology itself.

“I think that the Commission did a great job in parsing out what is high-risk or what should be banned,” Kutterer said. “And we support this approach of focusing on high-risk scenarios.”

Getting the banned list right

Some uses of AI will be banned outright by the Commission’s proposed laws. Social scoring is one such example that Sioli mentioned. This is the controversial practice of rating individuals on the basis of their social behavior or personal characteristics, and using the resulting score to determine things like their suitability for a job or a financial service. Other uses – and here she listed chatbots, deep fakes, emotion recognition systems – will be subject to transparency obligations “so that people know what and who they’re dealing with.” There will also be exceptions allowing law enforcement agencies to use remote biometric identification in ways that would otherwise be prohibited, she explained.

But for some, this remains a problematic grey area. Among them is Daniel Leufer, EU AI policy lead at Access Now, an advocacy organization that campaigns for comprehensive human rights protection where the use of technology is concerned. While he and Access Now are pleased to see a list of AI prohibitions included in the proposal, they do not believe it goes far enough.

“We think the exceptions are far too broad, they provide far too much leeway,” Leufer said. Having “complete leeway for the installation of cameras, for the purchasing of the technology” was akin to “just regulating when the switch can be turned on and off, which is highly problematic.”

He read out one of the definitions contained in Article 3 of the Proposal, as an example of one of the problems Access Now has identified. “It says a biometric categorization system is ‘an AI system for assigning natural persons to specific categories on the basis of their biometric data.’ And then it lists some of those categories: sex, age, hair color, eye color, tattoos, ethnic origin, or sexual or political orientation.”

That is, he said: “A crazy definition, because … you can perfectly well have a machine learning system assign people to hair color categories based on observable biometric data. You can’t do that with political orientation.”

Bridging the divide

Resolving these seemingly polar-opposite views will not be straightforward. Dragoș Tudorache, the Romanian MEP (Freedom, Unity and Solidarity Party) and Chair of the European Parliament’s Special Committee on Artificial Intelligence in a Digital Age, said these debates will have to be addressed in the European Parliament. “Here in Parliament, there are political groups which have been asking for quite some time for a total ban on the use of such technologies in public spaces,” he said. “So I think this will be a big discussion … and it remains to be seen how the different political groups will (be positioned) on this.”

But while stressing the importance of supporting privacy and human rights, Tudorache warned that too much red tape around the use of AI could harm the ability of European technology companies to innovate and compete globally. Closer trans-Atlantic cooperation is also needed, he said: “I feel that on this issue, there’s quite a strong majority in the European Parliament across the political groups to engage in this sort of dialogue with the U.S.”

Balancing needs and looking ahead

As a business that is already well-versed in the development of AI technology, Microsoft has reached a number of significant decisions on its appropriate use. “We have, as early as 2018, started to advocate for facial recognition technologies (and their use) to be regulated, and aligned our business practices to our respective commitments,” Kutterer said. “We support the approach generally that the Commission has taken on facial recognition specifically,” she added, noting that the specific exceptions and safeguards are now to be decided by the legislators, and that additional transparency obligations imposed on the police would improve the proposal and accountability overall.

More work will be needed, she continued, to ensure technology companies have clear guidance and regulations to work with, as well as to ensure businesses aren’t swamped with unrealistic obligations.

“For example, in the current draft, the provider will have to define the intended use,” Kutterer said. “But there are gazillions of potential uses, in particular where you have foundational AI systems, while there is no comprehensive transparency for the citizens to then be able to see when those AI systems are used – and that would probably be most helpful to citizens regarding their fundamental rights.”

Asked about the regulation’s effect on innovation, Kutterer reflected on the impact on SMEs in particular. The current requirements for certifying compliance and providing assurances that an AI system’s use will remain within the legal framework are one aspect of the regulation that could create hurdles to innovation, she suggested.

But despite the inevitable differences and disagreements, Tudorache remains positive from a lawmaker’s perspective. “All in all, I’m positive we can set this in motion,” he said. From there, he continued, Europe must become an ambitious, forward-looking partner in the AI sector, forging strong links with the US. “It should be a cooperation where we identify together very concrete projects (where) we can work on points of common research and innovation that we can agree upon,” he said.

Microsoft Corporate Blogs