Solving the challenge of securing AI and machine learning systems

Today, in collaboration with Harvard University’s Berkman Klein Center, we at Microsoft are publishing a series of materials we believe will help address a major challenge in securing artificial intelligence and machine learning systems. In short, there is no common terminology today for discussing security threats to these systems and methods to mitigate them, and we hope these new materials will provide baseline language that enables the research community to collaborate more effectively.

Here is why this challenge is so important to address. Artificial intelligence (AI) is already having an enormous and positive impact on healthcare, the environment, and a host of other societal needs. As these systems become increasingly important to our lives, it’s critical that when they fail, we understand how and why, whether the cause is the inherent design of a system or the actions of an adversary. There have been hundreds of research papers dedicated to this topic, but inconsistent vocabulary from paper to paper has limited the usefulness of important research to data scientists, security engineers, incident responders, and policymakers.

The centerpiece of the materials we’re publishing today is called “Failure Modes in Machine Learning,” which lays out the terminology we developed jointly with the Berkman Klein Center. It includes vocabulary to describe intentional failures caused by an adversary attempting to alter results or steal an algorithm, as well as vocabulary for unintentional failures, such as a system that produces results that might be unsafe.

The taxonomy laid out in “Failure Modes in Machine Learning” informs two other publications we’re releasing today, “Threat Modeling AI/ML Systems and Dependencies” and “AI/ML Pivots to the Security Development Lifecycle Bug Bar.” These two documents build on this taxonomy through the work of the AI and Ethics in Engineering and Research (AETHER) Committee at Microsoft and deliver new threat modeling, detection, mitigation and triage guidance in use today at Microsoft as part of our established security practices.

We hope that these contributions will help to continue to inspire innovative advances in artificial intelligence that benefit society while keeping this technology safe and secure. We welcome feedback from the research community and will continue to work collaboratively with Harvard University and others to help facilitate research in this important field.
