AI Strengthens Cybersecurity Defense More Than It Advances Threats

Cyberattacks are a potent national security threat. The 2025 Annual Threat Assessment of the U.S. Intelligence Community reports that “a range of cyber and intelligence actors are targeting our wealth, critical infrastructure, telecom, and media,” including China, Russia, Iran, and North Korea.

In the U.K., the National Cyber Security Centre states that artificial intelligence “will almost certainly continue to make elements of cyber intrusion operations more effective and efficient, leading to an increase in frequency and intensity of cyber threats.”

However, it’s not all bad news. Lennart Maschmeyer, a cybersecurity expert in Georgia Tech’s Jimmy and Rosalynn Carter School of Public Policy, finds that advances in artificial intelligence strengthen cybersecurity defenses more than they enhance threats. He also reports that the higher the stakes of a cyberattack, the less the offense benefits from incorporating AI. Most notably, Maschmeyer writes, “If this theory is right, as the current state of evidence suggests, this gap is likely to persist.”

His paper is forthcoming in the winter issue of International Security.

“I don't think AI is going to be as revolutionary on the offensive side as many expect, but rather more useful for the defense,” Maschmeyer said. “Algorithms have gotten really good at pattern recognition, making intrusion detection more effective. Generative AI models have gotten good at writing code, but when it comes to creativity and deception — both key to advanced cyber threats — they struggle.”
 

 

Pattern Recognition Aids Cyber Defense

AI is a catch-all term for many types of technology, one of the most common of which is machine learning. Machine learning algorithms analyze patterns in data to make predictions.

On the defensive side, machine learning aids cybersecurity by accurately spotting outliers or anomalies that signal something is amiss. By outsourcing low-level detection to reliable algorithms, humans have more time to deal with high-level threats that rely on novel techniques and human deception, Maschmeyer said.
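
To make the idea concrete, here is a minimal anomaly detection sketch using scikit-learn’s IsolationForest. The traffic features and numbers are illustrative assumptions, not Maschmeyer’s method; the point is only that an algorithm trained on routine activity can flag the rare, extreme event.

```python
# A minimal sketch of ML-based anomaly detection, assuming scikit-learn and
# two hypothetical traffic features (bytes sent, connection duration).
# This illustrates the general technique, not Maschmeyer's analysis.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated "normal" traffic: ~500 bytes sent over ~2-second connections.
normal_traffic = rng.normal(loc=[500.0, 2.0], scale=[50.0, 0.5], size=(1000, 2))

# A simulated outlier: a huge burst of data over a very short connection.
suspicious = np.array([[50_000.0, 0.1]])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

print(detector.predict(suspicious))          # -1 = flagged as an anomaly
print(detector.predict(normal_traffic[:3]))  # 1 = treated as normal
```

Because the forest isolates points that look unlike the training data, the burst connection is flagged while routine traffic passes, freeing human analysts for the harder cases.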

On the offensive side, cyberattacks require creativity, which is not one of the technology’s strong suits. Generative AI models, such as ChatGPT, use machine learning to predict which word, or token, should come next based on the millions of texts they were trained on. That means they’re not good at coming up with genuinely new ideas, even if they may seem to be. These models are also easy to deceive and unpredictable, Maschmeyer said.
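
A deliberately tiny toy makes the point. The sketch below is not how production models like ChatGPT actually work, but it shows the underlying logic of next-token prediction: the predictor can only recombine word sequences it has already seen in its “training” text.

```python
# A toy next-word predictor, deliberately simplified: it counts which word
# follows which in its "training" text, so it can recombine what it has
# seen but cannot invent anything outside that text.
from collections import Counter, defaultdict

training_text = "the attacker scans the network then the attacker moves laterally"
words = training_text.split()

successors = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    successors[current][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen in training.
    return successors[word].most_common(1)[0][0]

print(predict_next("the"))       # "attacker" (seen twice vs. "network" once)
print(predict_next("attacker"))  # "scans" ("scans" and "moves" tie; first seen wins)
```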

“So for these reasons, I see great benefits on the defensive side, while on the offensive side, they’re not really that clear,” he said.
 

Cybersecurity and Subversion

Maschmeyer also finds that as the potential impact of a cyberattack rises, the value of relying on AI declines.

In his course on Cybersecurity and Subversion in the Carter School, Maschmeyer compares cyberattacks to spies — two national security threats that have a lot in common. Microsoft reports that cyber adversaries attacked the United States more frequently than any other country in 2025.

“Computer viruses and spies are both hidden undercover in a system, and they both manipulate it to do things it's not supposed to,” Maschmeyer said. “But just like with spies, causing significant harm to an opponent is much easier in theory than in practice.”

In reality, cyber operations are extremely unpredictable, and deploying them at larger scale only exacerbates that unpredictability. Generative AI, which is non-deterministic (it can produce different outputs for the same input), adds yet another layer of uncertainty.
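
That non-determinism is easy to demonstrate. In the toy below, the probabilities are made-up stand-ins for a real model’s next-token distribution; because generative models sample from such a distribution rather than always picking the single most likely token, the same input can yield a different output on every run.

```python
import random

# Assumed, made-up probabilities standing in for a real model's next-token
# distribution. Sampling from it means identical inputs need not produce
# identical outputs.
next_token_probs = {"attacker": 0.4, "network": 0.3, "payload": 0.2, "exploit": 0.1}

def sample_next_token(probs):
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# The same "prompt", run three times, can produce three different continuations.
for run in range(1, 4):
    print(f"run {run}: detected the {sample_next_token(next_token_probs)}")
```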

“So if you think about it in those terms, the more ambitious your attack, the less the payoff of using AI and the greater the risk something goes wrong,” Maschmeyer said. “Because of these trade-offs, AI is actually least likely to revolutionize the high end of cyber conflict, namely interstate conflict.”
 

Large- vs. Small-Scale Cyberattacks

It’s important to note that Maschmeyer specializes in large-scale cyberattacks aimed at crippling governments and threatening national security, not the small-scale phishing emails we get at work.

He says that in the individual domain, generative AI’s ability to speed up cybercriminals’ work, write malware, and craft compelling social engineering narratives might make cyberattacks more common and more damaging for individuals and organizations — “Unless everyone heightens their defenses and becomes more aware.”
 

The Social Side of Cybersecurity

Why are cybersecurity and AI, topics more commonly associated with computing and engineering, taught in Georgia Tech’s Ivan Allen College of Liberal Arts?

Because it’s a problem for the whole of society, Maschmeyer said. Cybersecurity is a public policy responsibility: the government is tasked with protecting people, enacting regulations, securing government systems, and identifying potential targets that may be vulnerable or attractive to bad actors.

“Computer science departments are very good at the technical vulnerabilities of cybersecurity,” Maschmeyer said. “But when it comes to the social and political consequences of actors who exploit these technical vulnerabilities, that moves into the expertise of people on my side of the equation.”
 

 
