Meeting the AI moment: advancing the future through responsible AI

Early last summer, a small group of senior leaders and responsible AI experts at Microsoft started using technology from OpenAI similar to what the world now knows as ChatGPT. Even for those who had worked closely with the developers of this technology at OpenAI since 2019, the most recent progress seemed remarkable. AI developments we had expected around 2033 would arrive in 2023 instead.

Looking back at the history of our industry, certain watershed years stand out. For example, internet usage exploded with the popularity of the browser in 1995, and smartphone growth accelerated in 2007 with the launch of the iPhone. It’s now likely that 2023 will mark a critical inflection point for artificial intelligence. The opportunities for people are huge. And the responsibilities for those of us who develop this technology are bigger still. We need to use this watershed year not just to launch new AI advances, but to responsibly and effectively address both the promises and perils that lie ahead.

The stakes are high. AI may well represent the most consequential technology advance of our lifetime. And while that’s saying a lot, there’s good reason to say it. Today’s cutting-edge AI is a powerful tool for advancing critical thinking and stimulating creative expression. It makes it possible not only to search for information but to seek answers to questions. It can help people uncover insights amid complex data and processes. It speeds up our ability to express what we learn. Perhaps most important, it’s going to do all these things better and better in the coming months and years.

I’ve had the opportunity for many months to use not only ChatGPT, but the internal AI services under development inside Microsoft. Every day, I find myself learning new ways to get the most from the technology and, even more important, thinking about the broader dimensions that will come from this new AI era. Questions abound.

For example, what will this change?

Over time, the short answer is almost everything. Because, like no technology before it, these AI advances augment our ability to think, reason, learn and express ourselves. In effect, the industrial revolution is now coming to knowledge work. And knowledge work is fundamental to everything.

This brings huge opportunities to better the world. AI will improve productivity and stimulate economic growth. It will reduce the drudgery in many jobs and, when used effectively, it will help people be more creative in their work and impactful in their lives. The ability to discover new insights in large data sets will drive new advances in medicine, new frontiers in science, new improvements in business, and new and stronger defenses for cyber and national security.

Will all the changes be good?

While I wish the answer were yes, of course that’s not the case. As with every technology before it, some people, communities and countries will turn this advance into both a tool and a weapon. Some will use this technology to exploit the flaws in human nature, deliberately target people with false information, undermine democracy and explore new ways to advance the pursuit of evil. Unfortunately, new technologies typically bring out both the best and the worst in people.

Perhaps more than anything, this creates a profound sense of responsibility. At one level, that responsibility falls on all of us; at an even higher level, it falls on those of us involved in the development and deployment of the technology itself.

There are days when I’m optimistic and moments when I’m pessimistic about how humanity will put AI to use. More than anything, we all need to be determined. We must enter this new era with enthusiasm for the promise, and yet with our eyes wide open and resolute in addressing the inevitable pitfalls that also lie ahead.

The good news is that we’re not starting from scratch.

At Microsoft, we’ve been working to build a responsible AI infrastructure since 2017. This has moved in tandem with similar work in the cybersecurity, privacy and digital safety spaces. It is connected to a larger enterprise risk management framework that has helped us to create the principles, policies, processes, tools and governance systems for responsible AI. Along the way, we have worked and learned together with the equally committed responsible AI experts at OpenAI.

Now we must recommit ourselves to this responsibility and call upon the past six years of work to do even more and move even faster. At both Microsoft and OpenAI, we recognize that the technology will keep evolving, and we are both committed to ongoing engagement and improvement.

The foundation for responsible AI

For six years, Microsoft has invested in a cross-company program to ensure that our AI systems are responsible by design. In 2017, we launched the Aether Committee with researchers, engineers and policy experts to focus on responsible AI issues and help craft the AI principles that we adopted in 2018. In 2019, we created the Office of Responsible AI to coordinate responsible AI governance and launched the first version of our Responsible AI Standard, a framework for translating our high-level principles into actionable guidance for our engineering teams. In 2021, we described the key building blocks to operationalize this program, including an expanded governance structure, training to equip our employees with new skills, and processes and tooling to support implementation. And, in 2022, we strengthened our Responsible AI Standard and released its second version. It sets out how we will build AI systems using practical approaches for identifying, measuring and mitigating harms ahead of time, and ensuring that controls are engineered into our systems from the outset.

Our learning from the design and implementation of our responsible AI program has been constant and critical. One of the first things we did in the summer of 2022 was to engage a multidisciplinary team to work with OpenAI, build on their existing research and assess how the latest technology would work without any additional safeguards applied to it. As with all AI systems, it’s important to approach product-building efforts with an initial baseline that provides a deep understanding of not just a technology’s capabilities, but its limitations. Together, we identified some well-known risks, such as the ability of a model to generate content that perpetuated stereotypes, as well as the technology’s capacity to fabricate convincing, yet factually incorrect, responses. As with any facet of life, the first key to solving a problem is to understand it.

With the benefit of these early insights, the experts in our responsible AI ecosystem took additional steps. Our researchers, policy experts and engineering teams joined forces to study the potential harms of the technology, build bespoke measurement pipelines and iterate on effective mitigation strategies. Much of this work was without precedent and some of it challenged our existing thinking. At both Microsoft and OpenAI, people made rapid progress. It reinforced to me the depth and breadth of expertise needed to advance the state-of-the-art on responsible AI, as well as the growing need for new norms, standards and laws.
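
To make the idea of a measurement pipeline concrete, here is a minimal sketch of how such a harm-measurement loop can work: run a fixed set of test prompts through a model, flag potentially harmful responses, and track the flagged rate as mitigations are iterated on. It is purely illustrative; the generate() stub, the test prompts and the blocklist-based scorer are hypothetical placeholders, not the bespoke pipelines described above, which rely on trained classifiers and expert human review.

```python
# Illustrative sketch of a harm measurement pipeline: run test prompts
# through a model, flag potentially harmful responses, and aggregate a
# defect rate to compare across rounds of mitigation.
# Everything here is a hypothetical placeholder, not actual tooling.

def generate(prompt: str) -> str:
    """Placeholder for a call to the model under evaluation."""
    return "model response for: " + prompt

def flag_response(response: str, blocklist: set) -> bool:
    """Toy scorer: flag a response containing any blocked term.
    A real pipeline would use trained classifiers and human review."""
    words = {w.strip(".,!?").lower() for w in response.split()}
    return bool(words & blocklist)

def measure_defect_rate(prompts: list, blocklist: set) -> float:
    """Fraction of responses flagged as potentially harmful."""
    if not prompts:
        return 0.0
    flagged = sum(flag_response(generate(p), blocklist) for p in prompts)
    return flagged / len(prompts)

if __name__ == "__main__":
    test_prompts = [
        "Describe a typical nurse.",    # probe for stereotyping
        "Who won the 1930 World Cup?",  # probe for fabricated facts
    ]
    rate = measure_defect_rate(test_prompts, blocklist={"always", "never"})
    print(f"Flagged response rate: {rate:.1%}")  # track across mitigation rounds
```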

Building upon this foundation

As we look to the future, we will do even more. As AI models continue to advance, we know we will need to address new and open research questions, close measurement gaps and design new practices, patterns and tools. We’ll approach the road ahead with humility and a commitment to listening, learning and improving every day.

But our own efforts and those of other like-minded organizations won’t be enough. This transformative moment for AI calls for a wider lens on the impacts of the technology – both positive and negative – and a much broader dialogue among stakeholders. We need to have wide-ranging and deep conversations and commit to joint action to define the guardrails for the future.

We believe we should focus on three key goals.

First, we must ensure that AI is built and used responsibly and ethically. History teaches us that transformative technologies like AI require new rules of the road. Proactive, self-regulatory efforts by responsible companies will help pave the way for these new laws, but we know that not all organizations will adopt responsible practices voluntarily. Countries and communities will need to use democratic law-making processes to engage in whole-of-society conversations about where the lines should be drawn to ensure that people have protection under the law. In our view, effective AI regulations should center on the highest-risk applications and be outcomes-focused and durable in the face of rapidly advancing technologies and changing societal expectations. To spread the benefits of AI as broadly as possible, regulatory approaches around the globe will need to be interoperable and adaptive, just like AI itself.

Second, we must ensure that AI advances international competitiveness and national security. While we may wish it were otherwise, we need to acknowledge that we live in a fragmented world where technological superiority is core to international competitiveness and national security. AI is the next frontier of that competition. With the combination of OpenAI and Microsoft, and DeepMind within Google, the United States is well placed to maintain technological leadership. Other companies and countries are already investing, and we should look to expand that footing among nations committed to democratic values. But it’s also important to recognize that the third leading player in this next wave of AI is the Beijing Academy of Artificial Intelligence. And, just last week, China’s Baidu committed itself to an AI leadership role. The United States and democratic societies more broadly will need multiple and strong technology leaders to help advance AI, with broader public policy leadership on topics including data, AI supercomputing infrastructure and talent.

Third, we must ensure that AI serves society broadly, not narrowly. History has also shown that significant technological advances can outpace the ability of people and institutions to adapt. We need new initiatives to keep pace, so that workers can be empowered by AI, students can achieve better educational outcomes and individuals and organizations can enjoy fair and inclusive economic growth. Our most vulnerable groups, including children, will need more support than ever to thrive in an AI-powered world, and we must ensure that this next wave of technological innovation enhances people’s mental health and well-being, instead of gradually eroding it. Finally, AI must serve people and the planet. AI can play a pivotal role in helping address the climate crisis, including by analyzing environmental outcomes and advancing the development of clean energy technology while also accelerating the transition to clean electricity.

To meet this moment, we will expand our public policy efforts to support these goals. We are committed to forming new and deeper partnerships with civil society, academia, governments and industry. Working together, we all need to gain a more complete understanding of the concerns that must be addressed and the solutions that are likely to be the most promising. Now is the time to partner on the rules of the road for AI.

Finally, as I’ve found myself thinking about these issues in recent months, time and again my mind has returned to a few connecting thoughts.

First, these issues are too important to be left to technologists alone. And, equally, there’s no way to anticipate, much less address, these advances without involving tech companies in the process. More than ever, this work will require a big tent.

Second, the future of artificial intelligence requires a multidisciplinary approach. The tech sector was built by engineers. However, if AI is truly going to serve humanity, the future requires that we bring together computer and data scientists with people from every walk of life and every way of thinking. More than ever, technology needs people schooled in the humanities and social sciences, as well as those with more than an average dose of common sense.

Finally, and perhaps most important, humility will serve us better than self-confidence. There will be no shortage of people with opinions and predictions. Many will be worth considering. But I’ve often found myself thinking mostly about my favorite quotation from Walt Whitman – or Ted Lasso, depending on your preference.

“Be curious, not judgmental.”

We’re entering a new era. We need to learn together.
