Creating a common approach to Responsible AI development in Europe

Data Science & Law Forum 3.0 - Operationalizing Responsible AI

In its recently proposed regulation, the European Union has made it clear that artificial intelligence (AI) must not diminish human rights and freedoms. Creating a common approach to responsible AI development will be an important part of ensuring its use benefits Europeans. That will require close collaboration between a variety of stakeholders, including the technology industry.

That was one of the topics discussed by the panellists in the "What's next for operationalizing AI?" session during Microsoft's third Data Science & Law Forum.

Gathered for the session were Eva Kaili, Member of the European Parliament; Irina Orssich, AI Team Leader at DG CONNECT, European Commission; and Natasha Crampton, Chief Responsible AI Officer, Office of Responsible AI at Microsoft. The panel was chaired by Justin Nogarede, Digital Policy Advisor at the Foundation for European Progressive Studies.

During an earlier session that day, Regulating AI in the EU: A conversation, Margrethe Vestager, European Commission Executive Vice-President, said: "Let's have artificial intelligence everywhere where it makes a difference." How industry and regulators move toward that goal sparked a healthy debate among the panellists.

Creating a safe space to innovate

Describing how the proposal came into being, and how it will eventually be used to help define a common European approach to AI, Irina Orssich explained that the EU has a challenging goal to reach: balancing regulation with innovation.

“We need to see the proposals in the context of all the other European laws we have – the data protection rules, the fundamental rights, consumer protection rules, the anti-discrimination laws,” she said. “The regulations should be easy to use – easy and predictable for companies. We wanted to create legal certainty, and we have to strike the balance between fundamental rights where we do not want to see violations, and between also our capacity to innovate.”

Another major challenge in creating a standardized framework for responsible AI development and deployment in Europe is that the digital space does not align neatly with national borders. That raises the question of how the EU can enforce a set of AI standards when neither the provider nor the deployer of an AI system is based in an EU member state.

Orssich pointed out that the EU already sets standards that are followed internationally, including data protection, consumer rights and product safety. It was an observation that echoed the sentiments expressed by Executive Vice-President Vestager in an earlier session, and reiterated by Microsoft's Natasha Crampton: "It was encouraging to hear Executive Vice-President Vestager talk about the importance of having those dialogues with other governments, considering similar approaches, and [reaching] globally recognized standards," she said. "Ideally, we'd get to a place where we would have mutual recognition of things like conformity assessments, so that we're not spending time duplicating things across jurisdictions."

Ambitions that aim deliberately high

MEP Eva Kaili highlighted that the Commission wants to achieve the best end results, but that changes to the proposal are to be expected as part of the legislative process.

Stressing the EU’s aspirations to be a beacon of fairness in the use of AI, Kaili explained that there are many potentially negative consequences that she and her fellow MEPs are seeking to avoid.

“We are trying to safeguard our human rights, that the fundamental rights of Europe will be translated in the digital era as digital rights. And also to make sure [everyone] will benefit from AI by avoiding the implications or the consequences that might happen if it falls into the wrong hands, or those that want to maximize profit or to manipulate perceptions.”

Her main point, however, remained that the proposal published by the Commission should be regarded as a starting point: “Of course, the devil is in the details,” she said. “But I think if we move ahead based on our principles of defending our rights and quality of life, I think we will get it right in the end.”

The pursuit of fairness

Microsoft's Natasha Crampton acknowledged the Commission's leadership in developing the proposals to make trustworthy AI the norm in Europe. While encouraged by the risk-based approach, she said practical examples are needed to assess whether the mandatory obligations on providers of high-risk systems will lead to the desired policy outcomes. Here she pointed to a possible challenge with one of the proposal's requirements: that the datasets behind high-risk systems be free of errors. "I don't think I'm the first person to observe that an error-free standard for datasets is not possible to achieve in practice," she said.

She also highlighted that the pursuit of fairness calls for nuance. In the Commission’s proposals for harmonized rules on AI, Crampton said, there is “a great deal of emphasis on achieving fairness through data quality. In other words, it suggests that if you think carefully about the representativeness of data used to train and test your AI system, then you go a long way towards making sure that your system is fair, and not discriminatory.”

While that might work for some types of data, she went on to explain, it won't work for all of them.

“What we prefer to do at Microsoft is to take an end-to-end lifecycle approach to fairness, to make sure that we’re thinking about data quality, but also about mitigations beyond that, including how the model is designed and built, and whether we should use blocklists, or other technical measures to disallow inappropriate content.”
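That end-to-end idea can be made concrete with open-source tooling. The sketch below uses Fairlearn, a fairness-assessment library that originated as a Microsoft research project, to measure per-group selection rates for a trained model and then apply a constraint-based mitigation at the model level rather than the data level. The synthetic data, the choice of demographic parity as the fairness criterion, and the specific mitigation are illustrative assumptions for this sketch, not a description of Microsoft's internal process.

```python
# A minimal sketch of model-level fairness checks beyond data quality,
# using scikit-learn and the open-source Fairlearn library.
# The synthetic data and the demographic-parity criterion are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from fairlearn.metrics import MetricFrame, selection_rate
from fairlearn.reductions import ExponentiatedGradient, DemographicParity

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # hypothetical features
group = rng.integers(0, 2, size=1000)     # hypothetical sensitive attribute
y = (X[:, 0] + 0.5 * group + rng.normal(size=1000) > 0).astype(int)

# Step 1: train a baseline model and measure selection rates per group.
baseline = LogisticRegression().fit(X, y)
pred = baseline.predict(X)
frame = MetricFrame(metrics=selection_rate, y_true=y, y_pred=pred,
                    sensitive_features=group)
print("Per-group selection rates:\n", frame.by_group)

# Step 2: apply a mitigation that constrains how the model is built
# (a reductions approach enforcing demographic parity), then re-measure.
mitigator = ExponentiatedGradient(LogisticRegression(),
                                  constraints=DemographicParity())
mitigator.fit(X, y, sensitive_features=group)
mitigated = mitigator.predict(X)
mitigated_frame = MetricFrame(metrics=selection_rate, y_true=y,
                              y_pred=mitigated, sensitive_features=group)
print("After mitigation:\n", mitigated_frame.by_group)
```

The sketch mirrors Crampton's point: the training data is untouched in the second step; any improvement in the fairness metric comes from how the model is constrained and evaluated, not from the dataset alone.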

