A Balancing Act: Regulating AI to boost responsible innovation in Europe

Working with European customers, we see first-hand the wide range of ways organizations are innovating with AI. AI helps create new products and services, improves those already available, and can help tackle key societal challenges. Put simply, AI has the potential to transform our society and every sector in it, from agriculture to healthcare, education, and the environment.

However, AI systems also raise questions about how to ensure this powerful new tool is used responsibly and how its potential harms can be mitigated.

That’s why Microsoft’s responsible AI program has established internal processes, practices, and tools to help colleagues uphold our AI principles. As part of this work, we have long supported the goal of creating a regulatory framework for AI, including in Europe, particularly to set common guardrails for high-risk scenarios.

Six months ago, the European Commission led the way by publishing its landmark proposal on regulating AI. The proposal is an ambitious and important step toward making trustworthy AI the norm in Europe and beyond, and we support the AI Act’s vision and direction. Thoughtful regulation of AI can help establish Europe as a hub for innovation and human-centric AI. At the same time, an overly prescriptive approach could inhibit thriving AI ecosystems in Europe and be counterproductive to achieving those goals. It’s a fine balance.

We have offered comments on this proposal, based on lessons learned from working with customers and from our own journey building out our responsible AI program, and have suggested where we see opportunities to strengthen the proposal to better reflect the unique complexities of the AI ecosystem. Suppliers should be responsible for the systems they develop, and deployers of AI systems need to ensure appropriate protections for the individuals those systems may affect.

1. Calibrate obligations of different actors

Based on the ways our partners and customers use our technology to innovate and create value, we see that the same system can, in some cases, be both high risk and low risk depending on how it is deployed.

For example, a restaurant might use a text analytics service to scan large numbers of customer reviews for positive feedback, a relatively low-risk application. The same restaurant could use the very same service to scan CVs for keywords when shortlisting job applicants with certain skills, which could have fundamental rights implications if not subject to appropriate controls.
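To make the point concrete, here is a minimal, hypothetical sketch in plain Python. It is not tied to any real text analytics service, and the function and variable names are invented for illustration: the very same keyword-scanning routine serves a low-risk task (surfacing positive reviews) and a potentially high-risk one (shortlisting CVs), so the risk arises from the deployment context rather than the code itself.

```python
# Toy illustration only: the same capability, two very different risk profiles.
# Hypothetical example; not based on any specific Microsoft or third-party API.

def contains_keywords(text: str, keywords: set[str]) -> bool:
    """Return True if any of the given keywords appear in the text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return bool(words & keywords)

reviews = ["Great pasta, friendly staff!", "Slow service tonight."]
cvs = ["Five years of kitchen management experience.", "Recent graduate, no experience."]

# Low-risk deployment: flagging positive customer feedback.
positive_reviews = [r for r in reviews if contains_keywords(r, {"great", "friendly"})]

# Potentially high-risk deployment: the identical function now helps decide
# who gets a job interview, so fundamental-rights safeguards would apply.
shortlisted_cvs = [c for c in cvs if contains_keywords(c, {"management", "experience"})]

print(positive_reviews)
print(shortlisted_cvs)
```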

This example highlights the complexity of how AI systems are used. We suggest giving consideration to who in the ecosystem is best suited to meet certain obligations, including adding the role of ‘deployer’, responsible for the implementation of AI systems, and carrying the reference to ‘technology suppliers’ in Recital 60 over into the main body of the AI Act. Accountability should be assigned to those closest to the potentially affected citizens, particularly for the high-risk scenarios listed in Annex III.

AI suppliers frequently have no visibility into how customers integrate these services into their own systems. This is not to say that suppliers do not also bear accountability for developing their systems responsibly, including through testing, providing transparency, and adopting governance procedures.

2. Focus on outcomes and processes

We also see further opportunity to strike a better balance between regulation and innovation by introducing outcome- and process-based requirements for high-risk AI systems.

The challenges that AI raises are sociotechnical, influenced by a complex system of interrelated aspects, ranging from technical capabilities and limitations to the societal context of its uses.

We applaud the ambition to establish a single, uniform set of requirements that would apply across the broad and varied universe of AI products, services, and deployments, and we believe this goal is achievable. Yet, as currently drafted, many of the AI Act’s requirements are prescriptive or focused on specific scenarios. As a result, they may be workable in some cases but ineffective in others, and they may inadvertently hinder innovation in trustworthy AI research.

3. Address the sociotechnical nature of AI risks

Just as the grounding of the AI Act in the New Legislative Framework (NLF) creates challenges in allocating obligations, so does the suggested post-market monitoring regime.

To better address the sociotechnical nature of these challenges and the realities of the AI ecosystem value chain, post-market monitoring obligations should be allocated to those closest to where AI systems are used.

We see additional opportunities to promote innovation and strengthen fundamental rights protections for affected individuals, including in relation to law enforcement use of remote biometric identification technology.

The coming months will provide more occasions to debate and refine the current text to reflect input from all stakeholders. As part of Tech Fit for Europe, we are committed to playing our part in helping the EU embrace AI technologies safely and in ways that respect fundamental rights and European values. In this spirit, we will continue to engage constructively by sharing our thoughts and experience.

To learn more, we invite you to read Microsoft’s full response: Microsoft’s Response to the European Commission’s Consultation on the Artificial Intelligence Act

