How Europe is moving forward with AI standards

Data Science & Law Forum 3.0 - Operationalizing Responsible AI

When it comes to AI, the issues of greatest concern are not purely technical; they also revolve around trust, the risk of bias, and the ethics that underpin system development.

This was one of the points made by Geraldine Larkin, CEO of the National Standards Authority of Ireland, who moderated the “Demonstration mechanisms for AI accountability” session during Microsoft’s third Data Science & Law Forum.

It’s a point that set the scene for much of the ongoing debate and dialogue around the need to develop trustworthy, responsible AI systems.

“All of these issues require regulatory guidance,” Larkin said. “How do we know that the outcomes are being monitored? How do we know these systems are being trained properly? What happens if somebody takes the software and uses it for a purpose other than that which it was originally intended for?”

Discussing these questions on a panel were Patrick Bezombes, Co-chair of the AI focus groups at CEN and CENELEC; Maximilian Poretschkin, Senior Data Scientist at the Fraunhofer Institute; Salil Gunashekar, Research Leader at RAND Europe; and Jason Matusow, General Manager for Global Standards at Microsoft.

A patchwork of standards

Patrick Bezombes observed that there is a patchwork of standards – some international or regional, some domestic – affecting technology, services, consumer rights, and more. That could create problems, he cautioned. “It’s really hard to see which one is useful, or how to then interconnect [with it]. And this is just the beginning – in the coming years we will see tens, hundreds of these. This could become a nightmare. And we don’t want AI regulation to be inadequate, we don’t want standardization to be a nightmare.”

For Europe, with its clear ambition to regulate the development and deployment of AI, establishing the right framework for setting standards will be a vital part of the process.

Salil Gunashekar explained that RAND Europe is engaged in research exploring potential approaches to this problem. An important starting point, he said, would be understanding the needs of different users.

“Businesses or economic operators working in this space could be awarded a quality label for their AI applications, to provide an indication to the market signaling that AI application’s trustworthiness,” he said.

For customers and end users, such a label would eventually make it easy to identify AI applications that comply with set standards. “This approach would help improve the trust of users in AI products and services and thereby potentially promote the overall uptake of the technology,” he continued.

In Germany, Gunashekar said, a project is already underway to develop such a labeling system, taking its inspiration from the EU’s energy-efficiency ratings for domestic appliances. “It includes an evaluation of a number of ethical values, which an AI product or service should have,” he explained. “Things like transparency, privacy, accountability, justice, reliability, and sustainability. And then each of these six values would receive a rating.”
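
As a purely illustrative sketch – not the German project’s actual schema – such a label can be modeled as structured data: one rating per ethical value, on a scale borrowed from the energy-label format that inspired it. The product name, the A-to-G scale, and all identifiers below are assumptions made for illustration.

```python
from dataclasses import dataclass

# The six ethical values named for the German labeling project.
VALUES = ("transparency", "privacy", "accountability",
          "justice", "reliability", "sustainability")

# Hypothetical rating scale, borrowed from the EU energy-label format
# (A = strongest, G = weakest); the real project's scale may differ.
RATINGS = "ABCDEFG"

@dataclass
class AIEthicsLabel:
    """Illustrative label: one rating letter per ethical value."""
    product: str
    ratings: dict  # maps value name -> rating letter

    def validate(self) -> None:
        """Check that every value carries exactly one valid rating."""
        for value in VALUES:
            if self.ratings.get(value) not in RATINGS:
                raise ValueError(f"{value!r} needs a rating from {RATINGS}")

# Example: a fictional product rated on all six values.
label = AIEthicsLabel(
    product="ExampleVision QC",  # hypothetical product name
    ratings={"transparency": "B", "privacy": "A", "accountability": "C",
             "justice": "B", "reliability": "A", "sustainability": "D"},
)
label.validate()
```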

A three-step plan

A dynamic set of rules is needed to establish any such scheme, according to Maximilian Poretschkin of the Fraunhofer Institute, who laid out three crucial steps to achieving it.

“The first step is a code of conduct for artificial intelligence. But this needs to be operationalized within an organization,” he said, adding that some businesses may face the double challenge of aligning with existing rules while also preparing for future rule changes.

The second step concerns the way AI is used across supply chains. “Let me give you a concrete example of our current work,” he said. “We currently have a project with an industrial customer who wants an independent test of the performance of an AI-based quality control system.

“This customer’s control system is, in turn, based on a third-party optical character recognition (OCR) service, and within the project it’s difficult to address the quality of the third-party OCR system directly. This shows that compliance with quality standards is linked to requirements for suppliers and third-party providers.”

The third step is complicated by the very nature of AI – it is not a static piece of technology but one that constantly changes and adapts. “AI systems are highly dynamic,” Poretschkin said. “Even if they don’t continue to learn during operation, they can change their behavior due to changes within the operational environment.”

Such changes will complicate compliance for regulatory bodies as well as for those using AI.
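
One common way practitioners watch for this kind of behavioral change is to compare the statistics of production inputs against the data the system was validated on. The sketch below is a minimal, generic example – not a method described by the panel – using the Population Stability Index (PSI), with stand-in data and a rule-of-thumb threshold of 0.2, both assumptions for illustration.

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index: a drift score comparing the input
    distribution seen at validation time with the one seen in production."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    # Bin both samples on the same edges; the epsilon avoids log(0).
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Stand-in data: the production distribution has drifted away from the
# validation distribution (a shift in both mean and variance).
rng = np.random.default_rng(seed=0)
validation_inputs = rng.normal(0.0, 1.0, 5000)
production_inputs = rng.normal(0.8, 1.3, 5000)

score = psi(validation_inputs, production_inputs)
print(f"PSI = {score:.3f}")
if score > 0.2:  # common rule of thumb, not a regulatory requirement
    print("Input drift detected – trigger re-validation of the AI system")
```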

A set of shared ideals

Pulling these disparate needs and divergent challenges together coherently will call for close cooperation in creating mechanisms for control and compliance. Jason Matusow described Microsoft’s view that new kinds of technology call for new sets of rules.

“We at Microsoft have long held the position that AI regulation is necessary, but also stated it should be regulated with modern laws that reflect the realities of how machine learning and artificial intelligence technologies are developed, used, and maintained. And I think that’s what we’re seeing right now,” he said, referring to the EU’s recently published proposals on AI regulation.

In the debate around AI regulation, it may be tempting to focus on the challenges and difficulties. But Matusow pointed out that, at the heart of the debate, there are some very important commonalities.

“If you read through 10 or more [AI regulatory] papers from countries all over the world, you find that they were all essentially saying the same thing. First, the country or region wants to be competitive with AI to benefit its society and economy. Second, they don’t want Terminator. They don’t want AI to harm citizens or damage societal structures. So you have a natural tension between competitiveness and the drive to use the technologies, and the need for things to be done responsibly.”
