The power of AI comes with a powerful responsibility


I’m beyond excited to be here in London with my Microsoft colleagues, as well as innovators, researchers, experts and business decision-makers from around the world at Future Decoded. Over the next two days, we will hear inspiring stories about the possibilities that exist for artificial intelligence to transform the future of work in every industry – and how critical it is that businesses foster a culture that includes everyone as we search for ways to incorporate AI responsibly.

This morning we announced that Microsoft is collaborating with Novartis to use AI to develop treatments and medications faster, work that has the potential to improve patients’ lives across the globe. A critical component of our work together is the commitment by Novartis to take AI across the entire organization.

This will enable Novartis to bring together previously siloed data sets and research, and to use AI to build upon existing work quickly and efficiently. But it will also do something that might be even more important: It will empower Novartis associates.

Whether they work in research and development, commercial, operations, finance or elsewhere, Novartis associates are being asked to join this AI transformation. Their contributions and voices matter and are vital to the organization’s success.

Advocating a holistic approach

A cultural transformation is required for a company like Novartis to implement an AI strategy successfully throughout the organization. It requires empathetic leadership, collaboration across departments, trust among employees and a willingness to accept change. It’s not an easy feat. We at Microsoft know this because we haven’t always gotten it right the first time out of the gate. We are happy to share our learnings and best practices with our partners and customers, and with business decision-makers at large through our AI Business School, a free online master class series.

We launched AI Business School because we knew AI would be used more and more to help businesses innovate and solve problems, and we wanted to help business leaders be ready to do so with confidence. We recognized that every industry in the private and public sector faces its own challenges, and we wanted to provide concrete examples for each of them through tailored information and real-world case studies. Today, we are excited to roll out a new release of AI Business School, with expanded information for government leaders, new and adapted lessons within our responsible AI module and a new learning path for education industry decision-makers and educators.

Responsible AI: The expanded responsible AI content aims to illustrate how organizations can put principles into practice. As an example, we share design principles for building AI solutions, plus a video on the tools that can help you develop AI responsibly. We also have a new video Q&A with Matt Fowler, VP and Head of Machine Learning, Enterprise Data and Analytics at TD Bank Group, who talks about his company’s AI journey. Plus, trusted AI expert Cathy Cobey from EY shares how to make governance both tangible and measurable.

Education: We teamed up with education experts including Michelle Zimmerman, author of “Teaching AI: Exploring New Frontiers for Learning,” to highlight ways AI can transform classrooms as well as the operations and processes of learning institutions. We know that educators and administrators at every level of education are being asked to do more with less, and AI can help.

Government: A new module about identifying governing practices for responsible AI in government draws on the wisdom from experts at EY and Altimeter Group. We share examples from governments around the world to shed light on what government officials should consider and how to take action.

In addition to continually bolstering the online learning experience, we partner with customers around the globe for in-person training and collaboration. For example, UK enterprise customers will soon be able to participate in AI Business School sessions in the Microsoft Store in London!

I believe that helping everyone understand how to better approach AI can be a boon to every industry, and to society at large. I have been overwhelmed by the feedback and engagement with AI Business School, and I am humbled and grateful for the many conversations it has enabled with customers and business leaders!

One such customer is TD Bank, whose leaders have sought to advance an industry-wide dialogue on what responsible AI looks like in financial services. Microsoft works with TD on a variety of fronts as the bank continues to advance its AI capabilities.

TD hosts an industry roundtable on responsible AI. The organization’s leaders have sought to advance a dialogue on what responsible AI looks like in financial services. Photo by TD Bank.

Adapting to an AI-first world

As AI is adopted across financial services, TD’s leaders believe it’s a critical time to initiate an industry-wide discussion on the unique opportunities and challenges of this technology. TD recently released a report, Responsible AI in Financial Services, that brought together perspectives from AI experts and consumers to identify the key areas where the industry must focus to build best practices for the responsible use of AI. The three areas of focus identified in the report – addressing explainability, controlling for bias and promoting diversity – are informing TD’s work as it develops AI-powered solutions and unlocks new and innovative ways to meet customer needs.

Microsoft encourages each of our partners and customers to embed their organizational values into every aspect of their AI strategy. Our own core principles – fairness; reliability and safety; privacy and security; inclusiveness; transparency; and accountability – inform how we develop and design AI.

We continue to invest in the research and creation of tools that can tackle the challenges of bias, privacy, security and interpretability. Just last week we announced a partnership with Harvard University to develop a service for differential privacy that will open new possibilities for groundbreaking research while also protecting sensitive information.

And last month we joined forces with other industry leaders to improve the detection of AI-generated deepfakes. We will continue to make every effort to ensure that this technology we work so hard to advance will be used in ways that will also advance society. Because it is not enough to know that we CAN do something with the help of AI; it is vital that we first ask whether we SHOULD.

As I see it, the biggest potential that AI holds is its ability to help us work together to tackle our toughest problems. I see its possibility to bring people together, to improve lives and to help save our planet. One of our AI for Earth grantees, global nonprofit OceanMind, is doing just this: They are using AI to detect illegal and unregulated fishing, which helps authorities protect ocean life and promote sustainability.


The responsible creation and use of AI is not the job of any one company; it is a responsibility we all share, to think about not just what AI can do, but what it should do. Our overarching goal is to empower everyone to innovate and use AI responsibly so that it reflects their positive goals, good intentions and core values.