How Microsoft used AI to help crack down on tech support scams worldwide

The scam works like this: There you are, using your computer just like any other day, when suddenly a pop-up appears, warning you that your computer has been infected by a virus and you need to call tech support immediately.

If you try to get rid of the pop-up, it just keeps coming back. If you do call the alleged tech support number, you’re connected to telemarketers who claim to be affiliated with major technology companies – but are really scammers trying to bilk customers into paying for costly and unnecessary computer repairs or services.

When the Federal Trade Commission announced a major crackdown on these scammers last month, it was relying in part on the work of a group of Microsoft researchers and Digital Crimes Unit investigators who used artificial intelligence to help unravel the complex web of technical tricks the scammers were using to swindle users and avoid law enforcement.

The scammers weren’t easy to track down.

“These people are very clever,” said Chris White, a principal researcher at Microsoft’s Redmond, Washington, research lab who collaborated with the company’s Digital Crimes Unit to help track down the scammers.


And they were becoming a real headache for users.

Microsoft’s Digital Crimes Unit, which tracks and prevents cybercrime, receives at least 10,000 complaints a month from around the world about pop-up ads and telemarketers claiming to be legitimate tech support representatives. In general, the scam is more likely to start with a pop-up ad than a phone call, but there are regional exceptions. In Germany, for example, 85 percent of complaints were about tech scams that originated with a phone call.

The majority of the complaints appear to be coming from people 50 and older, but younger users aren’t immune. About 30 percent of the people who filed a complaint and gave their age said they were 49 or younger, according to the latest data from the Digital Crimes Unit.

Because not everyone reports an attempted attack, experts believe the total number of complaints represents only a small fraction of the people targeted by the scams.

Using AI running in the cloud to find scams
The reports from customers were helpful, but the team was still having trouble catching what White calls the “biggest fish” – the masterminds behind some of these large-scale operations.

That’s because the victims may only have limited information to help in the investigation, such as a phone number that’s been disconnected. Also, few victims capture screen shots of the original scam pop-up.

Finally, the scammers themselves are very good at compartmentalizing their business, separating the telemarketing operation from people building the pop-ups.

“We had a bunch of customers who were reporting scams but didn’t know who scammed them,” said Courtney Gregoire, assistant general counsel for the Digital Crimes Unit.


To catch the scammers, Microsoft sleuths first had to figure out where the attacks were coming from – no easy task, since they often only used an IP address, or virtual home, for a day or less before moving on to another location to avoid being caught.

To find them, the team created a model that looked for content that behaved in a way that was consistent with the scam, such as creating a pop-up that refreshed in microseconds to give the appearance it wasn’t going away. Then, the team scoured the web for those sites and captured screen shots of all the content that could potentially be a scam.
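The behavioral model itself isn’t described in detail here, but as a rough sketch of the idea, a crawler can score pages on markup patterns associated with scam pop-ups – for instance, scripts that trap or immediately re-open a warning dialog when the user tries to dismiss it. The patterns, threshold and URL below are hypothetical illustrations, not Microsoft’s actual detection rules.

```python
import re
import requests

# Hypothetical heuristics for scam-like pop-up behavior; the real model's
# features and thresholds are not public.
SUSPICIOUS_PATTERNS = [
    r"onbeforeunload",                 # traps the user when they try to leave
    r"setinterval\s*\(\s*.*alert",     # re-opens a warning dialog in a tight loop
    r"your computer has been infected",
    r"call\s+(?:microsoft\s+)?(?:tech\s+)?support",
]

def scam_score(url: str) -> int:
    """Return a count of suspicious patterns found in a page's HTML."""
    html = requests.get(url, timeout=10).text.lower()
    return sum(1 for pattern in SUSPICIOUS_PATTERNS if re.search(pattern, html))

if __name__ == "__main__":
    # Placeholder URL; a real crawl would feed in candidate pages at scale.
    url = "http://example.com/suspect-page"
    if scam_score(url) >= 2:
        print(f"Flagging {url} for screenshot capture and review")
```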

It would be impractical, if not impossible, to manually scan through the hundreds of thousands of questionable pieces of content they found, so the team turned to a branch of AI called machine learning to sort the data.

With machine learning, a system can learn to recognize something – such as similar words or images – as it’s given more data that shows what it’s looking for. With this project, the team used custom AI tools, running on Microsoft’s Azure cloud computing platform, to look for image similarity, content and other visual clues that would determine the chances that the pop-up was relevant to the fraud investigation.
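The article doesn’t name the specific models, but one simple, hedged illustration of image similarity is perceptual hashing: screenshots of near-identical scam pop-ups produce nearly identical hashes, so new captures can be grouped with previously labeled examples. The `imagehash` library, the file names and the distance threshold below are illustrative assumptions, not the team’s actual tooling.

```python
from PIL import Image
import imagehash

# Perceptual hashes of screenshots already confirmed as scam pop-ups.
# File names are placeholders for illustration.
known_scam_hashes = [
    imagehash.phash(Image.open("confirmed_scam_popup_1.png")),
    imagehash.phash(Image.open("confirmed_scam_popup_2.png")),
]

def looks_like_known_scam(screenshot_path: str, max_distance: int = 10) -> bool:
    """Group a new screenshot with known scam templates by hash distance."""
    candidate = imagehash.phash(Image.open(screenshot_path))
    # Hash difference (Hamming distance); small values mean near-identical images.
    return any(candidate - known <= max_distance for known in known_scam_hashes)
```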

Then, they used the computer vision API from Microsoft Cognitive Services to scan the ads for phone numbers and other bits of information that could provide clues as to their origin.
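As a minimal sketch of that step, the classic OCR endpoint of the Computer Vision API accepts an image URL and returns recognized text grouped into regions, lines and words, which can then be searched for phone numbers. The region, API version, placeholder key and phone-number regex below are assumptions for illustration; the current Cognitive Services documentation describes the exact request and response shapes.

```python
import re
import requests

# Endpoint region, API version and key are placeholders; consult the
# Computer Vision documentation for the current Read/OCR API.
OCR_URL = "https://westus.api.cognitive.microsoft.com/vision/v1.0/ocr"
SUBSCRIPTION_KEY = "<your-subscription-key>"

def extract_phone_numbers(image_url: str) -> list[str]:
    """OCR a pop-up screenshot and pull out anything that looks like a phone number."""
    response = requests.post(
        OCR_URL,
        headers={"Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY},
        json={"url": image_url},
        timeout=30,
    )
    response.raise_for_status()
    # Flatten the OCR result (regions -> lines -> words) into one string.
    words = [
        word["text"]
        for region in response.json().get("regions", [])
        for line in region["lines"]
        for word in line["words"]
    ]
    text = " ".join(words)
    # Loose North American phone-number pattern; a real investigation would
    # normalize formats for many countries.
    return re.findall(r"\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}", text)
```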

Without AI tools and cloud computing, White said, they would likely have had to approach the problem slowly and manually, using thousands of employees to document complaints and try to figure out whether the data they’d gathered pointed to a pattern.

With the technology, they were able to more quickly track the fast-moving scammers and devote investigator time to higher-value work, like finding the connections that could lead to those big fish.

“What we’re able to do is address the problem at the scale it’s happening, and provide the mechanisms for us to do something about it,” White said.

Making data understandable
The AI tools made their job faster, less costly and more precise, but then White and his team had another challenge: how to present their findings in a way that lawyers and other non-technical experts could understand them and decide what action to take.

White has had previous experience with this challenge. Earlier in his career, he’d been sent to Afghanistan to use data analytics to help with the U.S. defense strategy there. On the ground, he realized that his data wouldn’t help much unless he could find a way to present it that made sense to generals instead of computer scientists. He learned the value of visual tools such as charts and graphs.

With the tech support scam, the team needed to make their findings understandable to lawyers and law enforcement officials. To do that, they used a data visualization tool called Power BI to create interactive, easy-to-understand charts and data visualizations. The data analysis helped law enforcement understand patterns such as how old the users were, what geographic areas the scammers were targeting and which approaches they were taking in those areas.

The findings helped the FTC in its crackdown, and they also gave government officials a better sense of how the problem was affecting people.

Meanwhile, the scammers are always looking for new methods to dupe users and avoid getting caught, which means White and the Digital Crimes Unit are continuing to improve their methods for tracking and fighting them. Gregoire said the company has a strong commitment to continuing to fight these scams.

“We have a business interest in doing this, and we have a global good interest in doing this,” Gregoire said.

Other applications
The AI and data analysis tools Microsoft is using to track down tech support scammers are similar to the ones that White and his team are using for other applications that require people to scan large amounts of what’s known as unstructured data – things like websites – to look for patterns.

For example, the team has used similar AI tools, combined with Power BI, to help people without technical expertise create systems that can scan online news and social media to understand patterns that may affect their company or organization.

White will be discussing the work his team is doing to help people make sense of big sets of data at the Data Summit in Dublin, Ireland, this week.

For White and other researchers, it’s exciting to see the AI technologies and data analysis that were confined to research labs become available and useful to anyone.

“This is a story of a practical application of bona fide machine learning to address an important problem,” he said.


Allison Linn is a senior writer at Microsoft. Follow her on Twitter.