When to Use Machine Learning: Five questions we ask at Microsoft

By Tap Stephenson, Program Manager at MAIDAP

How MAIDAP evaluates a potential ML project

Our team’s mission is to spread machine learning across Microsoft, and after 50+ projects over three years, we’ve learned an important lesson: You don’t always need ML.

At the Microsoft AI Development Acceleration Program (MAIDAP), we work with teams around the company to incorporate ML into products and services. Teams will approach us with proposals explaining their idea, their data, and their desired outcomes.

Before starting a project, we run a series of checks to confirm a proposal’s feasibility, and we find that these checks are a reliable predictor of project success. As ML continues to spread throughout the tech industry, we wanted to share some of our favorite feasibility checks.

What problem are you solving?

It’s a classic PM question, and it’s especially important for an ML project.

Knowing the problem you’re solving will affect every downstream element of your ML project, from data engineering to model evaluation. Having a precise problem statement ensures a project remains focused.

The best (i.e. most feasible) proposals tend to focus on a single problem, metric, or outcome. While project goals vary widely, a few common types of problem statement come up again and again (each is sketched in code after this list):

  • Classification: What type of thing is this?
  • Regression: How much can we expect?
  • Grouping: How should we segment these items?
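
To make these categories concrete, here is a minimal sketch in scikit-learn. The data are synthetic placeholders, not from a real MAIDAP project; the point is only that classification predicts a discrete label, regression predicts a continuous quantity, and grouping (clustering) needs no labels at all.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))  # 500 items, 4 signals each (synthetic)

# Classification: "What type of thing is this?" -> a discrete label.
y_class = (X[:, 0] + X[:, 1] > 0).astype(int)
clf = LogisticRegression().fit(X, y_class)

# Regression: "How much can we expect?" -> a continuous quantity.
y_reg = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)
reg = LinearRegression().fit(X, y_reg)

# Grouping: "How should we segment these items?" -> no labels at all.
clusters = KMeans(n_clusters=3, n_init=10).fit_predict(X)
```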

While these types of problems seem simple, they can be worth millions when solved at Microsoft scale. For example:

  • Classification: Will this user’s PC fail? Identifying failure-prone PCs reduces employee downtime, especially during COVID-19 remote work.
  • Regression: What size virtual machine does this task need? Choosing the right amount of memory helps Microsoft save on compute resources and offer lower prices to customers.
  • Grouping: Are these Azure alerts from the same incident? Grouping notifications helps engineers narrow down root causes faster, making Azure more reliable.

Of course, MAIDAP also sees projects in other areas, like reinforcement learning, recommender systems, interpretability, or responsible AI. Whatever the project area, having the right problem to solve makes every part of an ML project easier, so we try to answer this question before moving on to the rest of our checklist.

Will ML solve this problem better than a rules-based solution?

Many teams are enthusiastic about applying ML, but it’s not always the best solution.

ML can solve a lot of different problems, but sometimes it’s more efficient to use hard-coded logic. Features like timestamps and query strings can contain rich information, and it’s possible that a well-designed if-statement works just as well as an advanced model. 
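
As a hedged illustration, here is what such a rules-based baseline might look like. The request fields and thresholds are hypothetical, not drawn from a real Microsoft system; the point is that a few lines of transparent logic can be a strong baseline that any model must beat.

```python
# A rules-based baseline built from the same signals an ML model would
# consume. Field names and thresholds here are hypothetical.
def needs_review(request: dict) -> bool:
    """Flag off-hours requests that carry unusually long query strings."""
    off_hours = request["hour"] < 6 or request["hour"] > 22
    long_query = len(request["query_string"]) > 200
    return off_hours and long_query

# If this rule already meets the product's accuracy bar, a model may
# add training and serving cost without adding value.
print(needs_review({"hour": 3, "query_string": "q=" + "x" * 300}))  # True
print(needs_review({"hour": 14, "query_string": "q=weather"}))      # False
```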

In other cases, an ML-based solution may not be possible. There have been a lot of advances in Natural Language Processing since Clippy, but we're still a long way from a generalized assistant in Word.

In our experience, ML tends to perform best when the problem space has the following characteristics:

  • A high number of relevant signals 
  • A complex relation between the signals and the output
  • A small number of metrics to optimize

Not all ML projects will have these characteristics, but we consider it a good sign when we see these patterns in the data.

Do you have access to the data?

This question can be especially difficult at companies like Microsoft, where data often exist in storage but remain inaccessible to protect user privacy.

Microsoft is trusted by everyone from governments running cloud services on Azure, to hospitals storing patient data, to families sharing calendars in Outlook, so we take this question very seriously. There are times when we think ML might help improve a product or service, but if we can't protect our users' privacy, it simply isn't worth it.

Over 50+ projects, we’ve found some solutions that safeguard user privacy while allowing ML development. For example:

  • Use Microsoft internal data: Microsoft has over 100,000 employees, which sometimes means our internal dataset is large enough to train a viable model.
  • Partner with an enterprise user: Large enterprises might be interested in an ML approach, and we can sometimes create custom user agreements that allow access to their data in exchange for early access to a model.
  • Scrub data for user information: Some data are highly structured and predictable, making it possible to reliably scrub user information before it enters a training set (see the sketch after this list).
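
As an example of that third approach, here is a minimal sketch of scrubbing structured log lines before they enter a training set. The log format and regular expressions are hypothetical, and any real pipeline should be reviewed by privacy experts rather than trusted to a few patterns.

```python
# A minimal sketch of scrubbing user information from structured logs.
# The log format and patterns are illustrative, not production-grade.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def scrub(line: str) -> str:
    """Replace emails and IPv4 addresses with placeholder tokens."""
    line = EMAIL.sub("<EMAIL>", line)
    line = IPV4.sub("<IP>", line)
    return line

print(scrub("2021-06-01 login ok user=jane@contoso.com ip=10.0.0.12"))
# -> "2021-06-01 login ok user=<EMAIL> ip=<IP>"
```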

Of course, every project requires a unique review, and teams don’t need to answer this question in isolation. Many companies will have in-house counsel or privacy experts, and we recommend contacting them as early as possible.

Are the data clean?

“Garbage in, garbage out” is repeated for a reason. At Microsoft, we’re lucky to have very high-quality data available, but that doesn’t mean we don’t come across a messy dataset now and then.

In our experience, messy data are most common in systems that have either been developed very quickly (i.e. technical debt) or over very long periods of time (i.e. legacy code). When we encounter these systems, there are a few steps we take to ensure a project’s success:

  • Budget extra time for data exploration and cleaning: Every project has to complete these tasks, but they take longer when the dataset is messy.
  • Determine a missingness strategy: Sometimes only a fraction of the data contains a relevant signal, and it's important to plan how your model will handle missing values.
  • Spend extra time on data validation: Data pipelines shouldn't just move and clean data; they should also validate a training dataset to ensure its contents match the data scientists' assumptions. (A sketch of the last two steps follows this list.)
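
Here is a minimal pandas sketch of those last two steps, a missingness strategy followed by loud validation. The column names, values, and thresholds are hypothetical.

```python
# Hypothetical telemetry with missing values.
import pandas as pd

df = pd.DataFrame({
    "cpu_pct": [12.0, None, 87.5, 45.0],
    "mem_gb":  [4, 8, None, 16],
})

# Missingness strategy: impute where a default is defensible,
# drop rows where the signal is essential.
df["mem_gb"] = df["mem_gb"].fillna(df["mem_gb"].median())
df = df.dropna(subset=["cpu_pct"])

# Validation: fail loudly if the data break the modelers' assumptions.
assert df["cpu_pct"].between(0, 100).all(), "cpu_pct outside [0, 100]"
assert df.notna().all().all(), "unexpected missing values after cleaning"
```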

A messy dataset rarely forces us to abandon a project, but it certainly changes how we plan. With extra attention on data exploration, strategy, and pipelines, the risks of messy data can often (not always!) be mitigated. 

Is there previous work we can build upon?

A common pitfall of ML projects is to assume you need to start from scratch.

Teams can get excited about new state-of-the-art model architectures or training techniques, and given the rapid advancement of ML, there’s always a tempting new method we’d like to try.

Still, for the initial stages of a project, we prefer off-the-shelf training tools and model architectures. There's always an opportunity to refine a model as a project develops, but we can often achieve 80% of a project's final value with well-known approaches.

When evaluating an ML project, it's helpful to first understand how off-the-shelf approaches could solve the problem, and only then move on to more advanced options.
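
As a sketch of what that first pass might look like, here is an off-the-shelf scikit-learn pipeline used as a baseline. The dataset is a stand-in bundled with the library, not project data; the idea is to get a defensible number before investing in anything custom.

```python
# An off-the-shelf baseline: standard preprocessing plus a standard
# classifier, evaluated with cross-validation. No custom architecture.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
baseline = make_pipeline(StandardScaler(), GradientBoostingClassifier())
scores = cross_val_score(baseline, X, y, cv=5)
print(f"baseline accuracy: {scores.mean():.3f}")
# Only once this number is known is it worth asking whether a custom
# approach would beat it by enough to justify the extra work.
```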

Conclusion

As excitement grows around ML, more and more teams are asking whether this technology could improve their product or service. We hope these questions will be a helpful guide, whether you’re at a seed-stage startup or you’re building your first pipeline at scale.

To learn more about MAIDAP, please check out our homepage. Each year we hire a new cohort of recent university graduates studying AI/ML, and if you’re interested in joining our program, please check the Microsoft University Hiring website this fall.
