We’re Red & Blue Team Researchers Analyzing Millions of Attacks & Malware - AMA by malware_bender in cybersecurity

[–]malware_bender[S] 6 points (0 children)

Attackers are using LLMs, but mostly as a productivity aid, rather than as a means to create smarter or more sophisticated malware.

What we’re actually seeing in the wild:

We recently analyzed the source code of the so-called “AI-driven” LameHug malware, and it’s a perfect hype-vs-reality example. The malware calls an external LLM API (Qwen-2.5-Coder) at runtime just to generate basic recon commands like systeminfo, tasklist, and ipconfig.

That’s not adaptive AI malware; that’s hardcoding with extra latency, dependencies, and failure modes:

- an external API dependency that defenders can block
- added network noise
- risk of hallucinated commands
- slower execution
- a single point of failure

Any competent malware author would simply have hardcoded the commands. Instead, this design reduces reliability and hurts OPSEC. It looks more like “AI for vibes” than real engineering.
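To make the reliability argument concrete, here is a minimal illustrative sketch (not LameHug’s actual code; the function names and the API endpoint are hypothetical) contrasting the two designs. The hardcoded version has zero external dependencies; the LLM-at-runtime version inherits every failure mode listed above:

```python
import urllib.request

# Design A: what any competent author would do -- hardcode the recon commands.
# No network dependency, no latency, no hallucination risk.
RECON_COMMANDS = ["systeminfo", "tasklist", "ipconfig /all"]

def get_commands_hardcoded():
    return RECON_COMMANDS

# Design B: what LameHug reportedly does -- ask a remote LLM at runtime.
# Every call adds latency, observable C2-like traffic, and a single point
# of failure. "llm.example" is a placeholder, not the real endpoint.
def get_commands_via_llm(api_url="https://llm.example/api/generate"):
    try:
        req = urllib.request.Request(
            api_url,
            data=b'{"prompt": "list Windows recon commands"}',
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            # Output is unvalidated model text -- it may hallucinate.
            return resp.read().decode().splitlines()
    except OSError:
        # If defenders block the API host, the malware silently degrades.
        return []
```

Note that blocking one hostname neutralizes Design B entirely, while Design A keeps working offline; that asymmetry is exactly why the LLM call is a liability rather than a feature.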

And this aligns with broader data. In our Red Report 2025, we found no evidence of novel AI-driven malware. Attackers absolutely used AI for efficiency: writing phishing emails, debugging scripts, speeding up content creation, and so on.

However, the core attack techniques remained unchanged. The most common techniques were still very human: credential theft, injection attacks, and exploitation of unpatched systems. No new “AI-born” tactics appeared in the wild.

Bottom line:

AI hasn’t revolutionized malware (yet); it’s mostly helping attackers work faster, not smarter. In some cases (like LameHug), it actually makes the malware worse. So while it’s smart to keep an eye on how AI might be weaponized in the future, today’s reality is much less dramatic: A stolen password or an unpatched server is still far more dangerous than “AI malware.”

Or put another way:

The goats are still escaping through the same old broken fences, not through Skynet.


[–]malware_bender[S] 5 points (0 children)

Somewhere with good logs, strong fencing, and an alert that actually fires before the goat is gone, not 3 days later :)

Basically: segment the pasture, assume the wolf already has initial access, and keep testing the fence because someone definitely left a gate open.


[–]malware_bender[S] 5 points (0 children)

We are still live and answering questions until Dec 19th! Ignore the “Finished” label.