Building AI responsibly from research to practice
The speed at which artificial intelligence (AI) technologies have grown in capability and moved from the lab into mainstream applications has surprised even the most seasoned AI experts. Despite this progress, the practice of AI is still young and difficult. This creates an interesting dynamic: practitioners are learning new AI skills even as they build AI applications, and there are many opportunities to learn and improve.
Microsoft’s AI principles set out our aspiration to design systems in accordance with the goals of fairness, reliability and safety, privacy and security, inclusiveness, transparency and accountability. But principles are only a starting point. Today, in collaboration with Boston Consulting Group (BCG), we introduced guidelines for product leaders, designed to prompt important conversations about how to put responsible AI principles to work. This guidance is distinct from Microsoft’s internal processes but reflects perspectives from both organizations.
We also recognize that principles and guidelines must contend with engineering realities. We need new kinds of engineering tools that help system developers better understand and refine AI technologies. One challenge facing experienced and inexperienced AI practitioners alike is that harms can surface inadvertently in AI applications and systems. The adverse behaviors and influences of AI applications range from trivial to deeply consequential, and flaws may remain concealed, or be uncovered only after applications are deployed, because they hide in algorithms, models, data and even assumptions. Mitigating such problems can be a race against time.
While a growing number of tools and platforms are available to help practitioners build AI applications, instruments that help engineers figure out what might go wrong remain scarce. Several years ago, our Aether (AI, Ethics, and Effects in Engineering and Research) team and Microsoft Research recognized the need for a new class of tools and have coordinated and supported their creation. Frontier research has been important in studying the challenges of building and fielding AI systems, recognizing what needs to be done to build AI responsibly and developing methods to address potential failures.
To take promising prototypes from research into practice, we relied on both research and engineering skills to develop robust, accessible tools that can be adopted by those who need them most. Our efforts have resulted in open-source tools that help ML practitioners identify issues, diagnose causes and mitigate problems before deploying applications. These tools include the following (a brief usage sketch follows the list):
- Error Analysis: Analyzes and diagnoses model errors
- Fairlearn: Assesses and mitigates fairness issues in AI systems
- InterpretML: Provides inspectable machine-learned models to enhance debugging of data and inferences
- DiCE: Enables counterfactual analysis for debugging individual predictions
- EconML: Helps decision-makers deliberate about the effects of actions in the world using causal inference
- HAX Toolkit: Guides teams through creating fluid and responsible human-AI collaborative experiences
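As a concrete illustration of how one of these tools is used, here is a minimal fairness-assessment sketch with Fairlearn's `MetricFrame`. The toy labels, predictions, group names and the choice of accuracy as the metric are illustrative assumptions, not part of the announcement:

```python
# A minimal fairness-assessment sketch using Fairlearn's MetricFrame.
# The data, sensitive feature and metric below are illustrative placeholders.
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame

# Hypothetical labels, predictions and a sensitive feature (e.g., a demographic group).
y_true = [0, 1, 1, 0, 1, 0, 1, 1]
y_pred = [0, 1, 0, 0, 1, 1, 1, 0]
sensitive = ["A", "A", "A", "B", "B", "B", "B", "A"]

# Disaggregate a standard metric by group to surface potential fairness issues.
mf = MetricFrame(
    metrics=accuracy_score,
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=sensitive,
)

print(mf.overall)       # accuracy over all examples
print(mf.by_group)      # accuracy per group
print(mf.difference())  # largest gap between groups
```

A large gap in `by_group` would then point toward mitigation, for example with Fairlearn's reduction-based mitigation algorithms.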
The tools continue to evolve. Today, we also announced the Responsible AI dashboard (Figure 1), which brings the functionality of Error Analysis, Fairlearn, InterpretML, DiCE and EconML together in a single pane of glass to help AI developers assess the fairness, interpretability and reliability of their models. Within the dashboard, the tools can communicate with each other and show insights on one interactive canvas for an end-to-end debugging and decision-making experience.
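To make this concrete, the following is a minimal sketch of how such a dashboard is assembled in code, assuming the `responsibleai` and `raiwidgets` packages; `model`, `train_df`, `test_df` and the `"label"` target column are hypothetical placeholders for a fitted scikit-learn-style classifier and pandas DataFrames:

```python
# A minimal sketch of constructing the Responsible AI dashboard for a trained model.
# `model`, `train_df`, `test_df` and the target column name are illustrative placeholders.
from responsibleai import RAIInsights
from raiwidgets import ResponsibleAIDashboard

rai_insights = RAIInsights(
    model,                   # a fitted classifier with predict/predict_proba
    train_df,                # training data as a pandas DataFrame
    test_df,                 # evaluation data as a pandas DataFrame
    target_column="label",   # name of the label column
    task_type="classification",
)

# Choose which analyses to surface in the single pane of glass.
rai_insights.explainer.add()       # InterpretML-style model explanations
rai_insights.error_analysis.add()  # Error Analysis cohorts and heatmaps

rai_insights.compute()                # run the selected analyses
ResponsibleAIDashboard(rai_insights)  # render the interactive dashboard
```

Because the analyses share one `RAIInsights` object, a cohort identified in Error Analysis can be carried over to the explanation and fairness views, which is what enables the end-to-end debugging experience described above.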
We’ve made good progress on understanding and mitigating the technical and sociotechnical issues of deploying AI in the open world, but there is much more to do. Moving from principles to practices is difficult given the complexity, nuance and dynamics of AI systems and applications. There is no quick fix and no silver bullet that addresses all the risks of applying AI technologies. But we can make headway by harnessing the best of research and engineering to create tools aimed at the responsible development and fielding of AI technologies.
We invite you to learn more about the Responsible AI dashboard and to contribute to its development.
Learn more
- Put Responsible AI into Practice event
- Ten Guidelines for Product Leaders to Implement AI Responsibly
- Responsible machine learning