Feb 24, 2026 · 5 min read
One Hacker Used AI to Breach 600 Firewalls in 5 Weeks
A Russian-speaking threat actor with limited skills used DeepSeek and Claude to compromise FortiGate appliances across 55 countries. Amazon's threat intelligence team documented the entire campaign.
A New Kind of Threat Actor
Between January 11 and February 18, 2026, a single financially motivated threat actor compromised over 600 FortiGate firewall appliances across 55 countries. The campaign was discovered by Amazon's threat intelligence team after they found the attacker's exposed infrastructure, complete with operational logs, AI-generated attack plans, and cached prompts.
The attacker was not a sophisticated nation-state hacker. Amazon researchers assessed them as having "low to medium baseline skills, heavily augmented by AI." When they encountered patched systems or hardened environments, they frequently failed and simply moved on to easier targets. What made this campaign remarkable was not skill. It was scale.
No Zero Days Required
The campaign did not exploit any software vulnerabilities. Instead, the attacker targeted FortiGate management interfaces left exposed on the internet across ports 443, 8443, 10443, and 4443. They attempted to authenticate using commonly reused passwords against devices with single-factor authentication.
Once inside, they extracted full configuration backups containing SSL VPN user credentials, LDAP bind accounts, IPsec VPN settings, and network routing information. Every compromised device shared the same misconfiguration: a management interface exposed to the internet, a weak password, and no multifactor authentication.
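Defenders can check their own perimeter for this exposure without any special tooling. Below is a minimal sketch that probes the four management ports the campaign reportedly targeted; it simply tests TCP reachability, not authentication, and should only be pointed at devices you administer.

```python
import socket

# Management ports the campaign reportedly probed on FortiGate appliances.
MGMT_PORTS = [443, 8443, 10443, 4443]

def exposed_ports(host: str, ports=MGMT_PORTS, timeout: float = 2.0) -> list[int]:
    """Return the subset of management ports accepting TCP connections."""
    open_ports = []
    for port in ports:
        try:
            # create_connection succeeds only if something is listening.
            with socket.create_connection((host, port), timeout=timeout):
                open_ports.append(port)
        except OSError:
            # Refused, filtered, or timed out: not reachable from here.
            continue
    return open_ports
```

Running `exposed_ports()` from outside your network against a firewall you manage shows what an opportunistic scanner would see; any port it returns is a candidate for an access-control list or for disabling internet-facing administration entirely.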
AI as the Force Multiplier
Two custom tools orchestrated the AI-augmented attack chain. ARXON, a Python-based Model Context Protocol server, interfaced with commercial AI services including DeepSeek and Anthropic's Claude to generate step-by-step attack plans, prioritize targets, and produce vulnerability assessments during live intrusions.
CHECKER2, a Go-based orchestrator, automated parallel VPN connections and scanning across thousands of stolen configurations. An exposed server revealed operational logs processing 2,516 targets across 106 countries in containerized batches.
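CHECKER2's code has not been published, but the batched-parallel pattern visible in its logs is a standard one. A hedged Python sketch of the general technique, splitting a target list into fixed-size batches and fanning each batch across a worker pool (batch and worker counts are illustrative, not CHECKER2's actual values):

```python
from concurrent.futures import ThreadPoolExecutor

def batched(items: list, size: int):
    """Yield successive fixed-size batches from a flat target list."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def scan_all(targets: list[str], scan_fn, batch_size: int = 100, workers: int = 20) -> dict:
    """Process targets batch by batch, running each batch in parallel."""
    results = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for batch in batched(targets, batch_size):
            # pool.map preserves input order, so zip pairs each target
            # with its own outcome.
            for target, outcome in zip(batch, pool.map(scan_fn, batch)):
                results[target] = outcome
    return results
```

The point of the pattern is throughput, not sophistication: with a pool of ordinary worker threads, a single operator can churn through thousands of stolen configurations in hours, which is exactly the scale effect Amazon's logs describe.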
Files labeled "claude" and "claude-0" contained task outputs and cached prompts, with a settings file pre-approving autonomous execution of credential extraction and lateral movement tools. AI augmentation achieved "operational scale that would have previously required a significantly larger and more skilled team," according to Amazon's CISO.
Pre-Ransomware Staging
In confirmed intrusions, the attacker deployed standard penetration testing tools: gogo for port scanning, Nuclei for HTTP vulnerability discovery, BloodHound for Active Directory mapping, and Impacket for NTLM relay and DCSync attacks.
They specifically targeted backup infrastructure, particularly Veeam Backup & Replication servers, behavior consistent with pre-ransomware staging. The pattern suggests the stolen configurations were being used to burrow deeper into networks before deploying ransomware.
The Telltale Signs of AI Generated Code
Amazon's researchers noted that the attacker's code exhibited clear hallmarks of machine authorship: redundant comments, fragile parsing logic, and simplistic architecture. The code worked well enough to achieve the objective but lacked the polish of experienced development.
This is the key insight. The attacker did not need to be skilled. They needed to be persistent. AI lowered the barrier to entry for conducting coordinated, multi-country attacks at scale. A single attacker with basic skills compromised 600 devices across 55 countries in five weeks, a scale that would have previously required a well-funded team.
What This Means
The commercial AI tools used in this attack were accessed "without apparent detection," raising uncomfortable questions about whether AI providers can effectively monitor for and prevent misuse of their services in real-time attack scenarios.
Amazon recommends disabling internet-exposed management interfaces immediately, enforcing strong unique credentials with multifactor authentication, isolating backup infrastructure from production networks, and monitoring for post-exploitation indicators like DCSync and BloodHound activity.
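The DCSync indicator is concretely detectable in Windows Security logs: a DCSync attempt appears as Event ID 4662 requesting the directory-replication extended rights, from an account that is not a domain controller. A hedged sketch of that heuristic over already-parsed event records (the dictionary field names are illustrative; adapt them to your log pipeline's schema):

```python
# Extended-rights GUIDs that grant directory replication, the
# permissions a DCSync attack must request.
REPLICATION_GUIDS = {
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
}

def flag_dcsync(events: list[dict], dc_accounts: set[str]) -> list[dict]:
    """Flag Event ID 4662 records that request replication rights
    from accounts not on the known domain-controller allowlist."""
    alerts = []
    for ev in events:
        if ev.get("event_id") != 4662:
            continue
        props = ev.get("properties", "").lower()
        if any(guid in props for guid in REPLICATION_GUIDS):
            if ev.get("subject_account", "") not in dc_accounts:
                alerts.append(ev)
    return alerts
```

Legitimate replication between domain controllers triggers the same event, which is why the allowlist of DC machine accounts matters: anything outside it requesting these rights deserves an immediate investigation.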
The era of AI-augmented hacking is no longer theoretical. The first documented campaign at this scale shows that the primary defense is not better AI detection. It is better security hygiene.