Responsibility: Responsible AI in action
At Microsoft we believe AI solutions built responsibly not only ensure inclusion, fairness, and transparency, but also empower you to create better products and experiences. In this episode you’ll hear experts inside and outside of Microsoft share learnings from their own experiences to help you innovate responsibly as you develop your own AI systems.
Transcript excerpt
David Carmona: In a previous episode of this podcast, I told a story about my father: how he was a carpenter at heart, and how that made him better at his business. What I didn’t mention, though, is that he also wanted me to be a carpenter and follow his path, and we all know how that ended: I’m now presenting this podcast about AI. I’m now making the same mistake with my son, Guillermo. I really want him to learn computer programming, and that’s why I was so happy when he was ten and he told me that he wanted to learn Python. However, he told me something that made me think. Guillermo, do you remember what you told me?
Guillermo Carmona: Yeah, I think I told you that you had to be with me the entire time when I was learning.
David: Yep, that’s exactly what you told me. And why did you tell me that?
Guillermo: I was scared that I would create an AI that could turn against me or something.
David: Yeah, I remember that you told me that. I know that it was partly a joke, but behind it there was something that we should all think about. Guillermo, I appreciate that you joined me for this podcast. How do you feel about AI now?
Guillermo: Well, now I feel much better about the situation. Back when I was a kid, my imagination was at an all-time high, and since there were all these movies about AI that I watched, I could imagine a world with Skynet taking over. But now that I know more about programming, I know that it’s impossible for me to create something like that by myself. So, please tell everybody that they should be responsible while developing AI, you know, just in case.
David: Thank you, Guillermo. I’ll do that. That’s exactly what I’m planning to do in this podcast. Thank you so much, Guilli, I’ll see you at dinner.
Guillermo: Okay, you’re welcome. Goodbye. Thank you.
David: So, welcome to the AI Business School Podcast from Microsoft. I’m David Carmona and I have three kids, one of whom you just met, who are scared of their own father’s job. This may seem funny, but it actually reveals something deeper that is happening in society. AI is advancing very quickly, and it’s imperative that we advance it in a way that earns society’s trust. Without trust, AI won’t have any meaningful impact. As business leaders, it is critical that we develop AI responsibly, so our customers, our employees, or our 10-year-old kids can trust it. We’re going to have that conversation today. We’ll start with Natasha Crampton, Microsoft’s Chief Responsible AI Officer, who leads the company’s recently formed Office of Responsible AI, which specifically addresses these issues.
Natasha Crampton: I think it’s critically important that we treat responsible AI like we have treated privacy and security and we see responsible AI as a core element of trust. We know that people don’t use technology that they don’t trust, and so making sure that we are baking in responsible AI considerations when we’re building the technology, also when we’re deploying the technology, is really just an essential part of unlocking the value of these promising new AI technologies.
David: The responsible AI journey starts by acknowledging the challenges of AI. Like any other technology breakthrough in the past, AI comes with associated challenges and risks. And in the case of AI, those challenges are so unique that it’s even more important that we understand them.
My colleague Sarah Bird, who led Responsible AI for Azure Machine Learning at Microsoft, can take us on a deeper dive.
Sarah Bird: Both the power of the technology and the way in which we’ve seen it fail has led to a push around this responsible AI space and really adopting techniques to mitigate these failures and detect these issues up front. But it also started a much bigger conversation about ethics and society and technology.
David: Our attitude toward this conversation should be one of curiosity and growth mindset. We have to ask the difficult questions, even if we don’t have the solutions yet. Sometimes it’s not about what technology can do, but what it should do, according to Natasha.
Natasha: It’s really important to have very open lines of communication in the beginning. I think to approach the engagement with humility and to be candid about the fact that we increasingly know what the questions are, but we don’t always know what the solutions are. And also just to recognize that different people will have different motivations for being involved in this work.