9 ways hackers will use machine learning to launch attacks

Machine learning and artificial intelligence (AI) are becoming a core technology for some threat detection and response tools. The ability to learn on the fly and automatically adapt to changing cyberthreats gives security teams an advantage.

However, some threat actors are also using machine learning and AI to scale up their cyberattacks, evade security controls, and find new vulnerabilities, all at an unprecedented pace and to devastating effect. Here are the nine most common ways attackers leverage these technologies.

1. Spam, spam, spam, spam

Defenders have been using machine learning to detect spam for decades, says Fernando Montenegro, analyst at Omdia. “Spam prevention is the best initial use case for machine learning,” he says.

If a spam filter provides reasons why an email message did not go through, or generates a score of some kind, then an attacker can use that output to modify their behavior. They’d be using the legitimate tool to make their own attacks more successful. “If you submit stuff often enough, you could reconstruct what the model was, and then you can fine-tune your attack to bypass this model,” Montenegro says.

It’s not just spam filters that are vulnerable. Any security vendor that provides a score or some other output could potentially be abused, Montenegro says. “Not all of them have this problem, but if you’re not careful, they’ll have a useful output that someone can use for malicious purposes.”
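The probing loop Montenegro describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not any real attacker's code: the `spam_score` function here is a toy keyword-based stand-in for a filter's score output, and the substitution table is invented for the example. The point is the feedback loop, in which each candidate message is "submitted" to the scorer and mutations are kept only when the reported score drops.

```python
# Toy stand-in for a spam filter that exposes a score (hypothetical;
# in the scenario described, the attacker would query a real filter).
SPAM_WORDS = {"free", "winner", "prize", "urgent"}

def spam_score(message: str) -> float:
    """Fraction of words matching the toy spam-word list."""
    words = message.lower().split()
    return sum(w in SPAM_WORDS for w in words) / len(words) if words else 0.0

# Invented substitutions an attacker might try to slip past the filter.
SUBSTITUTIONS = {"free": "complimentary", "winner": "selectee",
                 "prize": "reward", "urgent": "time-sensitive"}

def evade(message: str) -> str:
    """Try word substitutions, keeping each one only if the score drops."""
    words = message.split()
    best_score = spam_score(message)
    for i, w in enumerate(words):
        replacement = SUBSTITUTIONS.get(w.lower())
        if replacement is None:
            continue
        candidate = words[:]
        candidate[i] = replacement
        # Each check here represents one "submission" to the filter.
        if spam_score(" ".join(candidate)) < best_score:
            words = candidate
            best_score = spam_score(" ".join(words))
    return " ".join(words)

original = "urgent you are a winner claim your free prize"
evaded = evade(original)
```

Run against enough messages, the same feedback also lets the attacker map which features the model penalizes, which is what makes score-revealing outputs risky.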

2. Better phishing emails

Attackers aren’t just using machine-learning security tools to test whether their messages can get past spam filters. They’re also using machine learning to create those emails in the first place, says Adam Malone, a partner in technology consulting at EY. “They’re advertising the sale of these services on criminal forums. They’re using them to generate better phishing emails. To generate fake personas to drive fraud campaigns.”

Copyright © 2022 IDG Communications, Inc.
