Trust and Excellence Made in Europe: Our Response to the EU White Paper on AI


Artificial Intelligence (AI) is among the most powerful technologies of our time. It is having nothing less than a transformational impact on our increasingly digital lives in the 21st century. In recent years, institutions and governments around the world have begun to realize the need to address both the opportunities and the societal challenges raised by AI. Increased attention is being paid to how we outline the guardrails that must accompany the development and deployment of AI. That is good news. The hard question is how to do it. The European Union is taking an important step to help set the world on a path towards trust and excellence around AI.

In February, the European Commission published its initial proposal for a regulatory framework for AI and opened a stakeholder consultation. Today we have published our response.

Let me share a couple of highlights here:

Encouraging the uptake of AI in a way that advances trust and respects European values is a goal we share at Microsoft, and we commend the Commission for its initiative in proposing a regulatory framework for high-risk AI and its ambition to make Europe a world leader in this area.

The vast positive potential of AI and its applications across all sectors of society and the economy represents a paradigm shift in computing, one that is only just beginning. We firmly believe in the positive and empowering potential of AI for European businesses and citizens. Every day, we see people using AI to tackle major societal challenges, create new products and services, and improve the quality and security of existing offerings across the EU, even during the COVID-19 pandemic.

But recent developments have also demonstrated the importance of addressing social inequalities. Ensuring that AI is not part of the problem requires close scrutiny of how we use AI and how we collectively develop and deploy it in a way that promotes our shared societal goals. We all have a duty to act ethically and responsibly and embed key principles of transparency, accountability, security, privacy, safety, and inclusiveness into our organizational cultures.

We also need solid regulatory frameworks to mitigate possible harms. To this end, our main suggestions in response to the EU’s proposed AI regulatory framework are to:

  • Incentivize AI stakeholders to adopt governance standards and procedures for operationalizing trustworthy AI. For example, developers should be transparent about limitations and risks inherent in the use of any AI system. If this is not done voluntarily, it should be mandated by law, at least for high-risk use cases.
  • Leave space for positive uses of AI. Ensure that the cost of complying with requirements does not prevent products and services from reaching the market, when the use of AI can make these safer and better.
  • Differentiate types of harm, as risks to safety and risks to fundamental rights require different rules and compliance regimes.
  • Clarify which requirements apply to which actors (e.g. developers, deployers or other end-users) and impose responsibilities on those best placed to comply with them.
  • Rely on existing laws and regulatory frameworks as much as possible. Where new laws are needed, they should be adopted.

We believe the EU is uniquely suited to establish an ecosystem of excellence around AI. Europe’s diversity of cultures, traditions and perspectives all serve as tremendous assets in building and maintaining a cutting-edge research and innovation community. Microsoft supports EU research and innovation through multiple initiatives and will continue to invest in this area. Excellence in research will also allow European businesses to more quickly develop and leverage unique AI-driven solutions. This, in turn, will foster widespread economic progress and help address major societal challenges such as climate change, access to healthcare and the modernization of public services.

Microsoft is committed to helping the EU make AI work for every European, by partnering across the public sector, academia, civil society, and industry. Driving trust in AI is an ongoing journey but it is fundamental to encouraging its wider adoption. In some cases, this might require new laws, or new interpretations of existing laws. The challenge is to balance the mitigation of potential harms with the promotion of the vast positive potential of AI. We welcome the opportunity to work with the Commission and other stakeholders to make trustworthy, responsible AI the norm across Europe and beyond.

In 2016, Microsoft announced a set of human-centered principles to guide the ethical creation and use of AI, which were further developed in ‘The Future Computed: Artificial Intelligence and its role in society’, released in 2018. These principles guide our approach to trustworthy AI, from design and development to deployment. Today, we are putting our principles into practice by embracing diverse perspectives, fostering continuous learning, and proactively responding as AI technology evolves. We are also working with others across industry, academia, and civil society on ways to operationalize these principles across the full lifecycle of AI.


Casper Klynge
Vice President of European Government Affairs