Sharing the benefits of AI-driven growth

Today sees the publication of the European Commission’s AI High-Level Expert Group Policy and Investment Recommendations for Trustworthy AI. These aim to ensure that all European citizens and businesses can reap the benefits of innovative AI technologies. At Microsoft, we believe that three areas in particular are crucial: skills, data sharing, and governance. We also encourage the EU to consider developing appropriate regulations to guide the use and development of facial recognition technology.

On the topic of skills, we know that many business leaders struggle to get started with implementing AI across their companies, with challenges ranging from changing company culture to ensuring that AI is being deployed responsibly. To help European businesses with this transition, and to ensure that the conditions are in place to encourage the uptake of new technologies, Microsoft recently launched an AI Business School, sharing insights and practical tips on how to create a data-driven, collaborative company culture and apply AI in a strategic yet compliant way.

An essential principle underpinning the development of AI-based technologies as part of Europe’s data economy is data sharing. Unfortunately, both internal silos and limitations on external data sharing mean that many organizations are unable to use data to generate new insights, identify new business opportunities, and better serve their customers. The Commission’s recommendations rightly highlight how overcoming such obstacles depends on responsibly governing the development and sharing of datasets intended for training models. But to ensure companies can feel confident sharing data both internally and externally, we also need a framework encompassing standards, licensing models, and, where necessary, solutions that guarantee data confidentiality.

Ultimately, people will only use technology that they trust. To increase trust in AI, we must observe principles for responsible development and deployment, and respect fundamental rights. Technology companies must abide by such principles. But we also need a sound governance model that includes risk management frameworks and is accompanied by legislation where necessary. Such a model would help ensure that AI products and services are trustworthy throughout their lifecycle.

We believe that a clear AI lifecycle is the backbone for applying organizational risk management processes to AI systems and can help describe how those systems will be used. Tools that promote more responsible development and use of AI should also be encouraged. To that end, Microsoft has published papers on business understanding, on common challenges related to training data for machine learning models (Datasheets for Datasets, Differential Privacy), on modeling tools for AI systems, and on the intelligibility of machine learning systems.

Where necessary, the EU should consider introducing binding instruments for AI to uphold the rule of law and safeguard fundamental rights. One area where legislation is urgently required relates to the use of facial recognition technology.

Creating a regulatory framework that guards against risk while still ensuring that technology can be used effectively requires a thoughtful approach, rooted in clear understanding of how the technology works, what it can or cannot do, and how to achieve accurate results. Microsoft has already implemented a set of ethical principles to govern the development and deployment of our facial recognition technology. We have also published a Transparency Note for our Azure Face API as part of our broader effort to implement our ethical principles on facial recognition.

While these principles and practices are helping us develop and deploy our technology, the risks associated with facial recognition technologies demand immediate policy action. Although the General Data Protection Regulation and the Data Protection Directive for Law Enforcement are important baselines, they do not fully address the issues that may arise.

For example, regulation should require companies developing and marketing such technology in Europe to enable third party testing for accuracy and unfair bias.

Moreover, tech companies that offer facial recognition services should be required to provide documentation that explains the capabilities and limitations of the technology in terms that customers and consumers can understand. Our Face API Transparency Note is an example of such transparency.

Microsoft is working to advance discussions around these vital topics. Starting today, we will participate in the pilot phase of the assessment list of the ethics guidelines on trustworthy AI. We appreciate the work of the High-Level Expert Group on AI, helping the EU lead the way in shaping a future for AI rooted in ethical principles, fundamental rights, and the rule of law. As this debate continues to evolve and new issues emerge, we welcome Europe’s leadership on this topic and look forward to working together alongside other companies, researchers, civil society, and policymakers to forge a responsible, inclusive pathway for AI adoption.


Cornelia Kutterer
Senior Director, Rule of Law and Responsible Tech, European Government Affairs, Microsoft

Cornelia is responsible for AI, privacy and regulatory policies in the EU with a focus on digital transformation and ethical implications. She leads a team working on corporate and regulatory affairs, including competition, telecom and content policies. She has long-standing experience in Information Society and Internet policies at the European level and speaks regularly at regional and international conferences. Previously, Cornelia was Senior Legal Advisor at BEUC, the European Consumer Organisation, heading up the legal department and driving the policy agenda for consumers’ digital life with a focus on intellectual property, data protection and e-commerce. She has also gained experience in a top 10 law firm in the fields of competition law and regulatory affairs, and in a German organisation focusing on the freedom of services and labour law. She started her professional career in the European Parliament as a political advisor to an MEP in 1997. Cornelia is a qualified German lawyer and holds a master’s degree in information technology and telecommunication laws. She studied law at the Universities of Passau, Porto (Portugal), Hamburg and Strathclyde (UK).