AI Hacker Breached 600 Firewalls — In the GenAI Era, Even Amateurs Become Hackers

Category: AI Security
Tags: FortiGate, GenAI, threat intelligence, AWS, IBM X-Force, ransomware, misconfigurations

On February 20, 2026, AWS Chief Security Officer CJ Moses released an internal threat intelligence report.1 The content was simple yet shocking: A single Russian-speaking hacker or small group had penetrated over 600 FortiGate firewall devices across 55 countries using only commercial generative AI (GenAI) services. The attack period spanned just 38 days, from January 11 to February 18, 2026.

What makes this incident special is that no new vulnerabilities were used. The attacker didn’t exploit FortiOS bugs. Instead, they targeted management ports exposed to the internet and weak credentials protected by single-factor authentication — configuration errors. AI automatically discovered and scaled these basic vulnerabilities, allowing one person to execute operations that would previously have required dozens of team members.

That same day, IBM X-Force released their 2026 Threat Intelligence Index.2 Initial compromise through internet-exposed applications increased 44% year-over-year, and major supply chain breaches nearly quadrupled over five years. The two reports examined different data but pointed to the same conclusion: AI is exploiting foundational security gaps at unprecedented speed.

Campaign Overview: One Person Attacks 55 Countries in 38 Days

The AWS threat intelligence team's capture of this campaign was a fortunate coincidence. The attacker made the mistake of placing their malicious toolkit on publicly accessible infrastructure: the same server held AI-generated attack plans, victim configuration files, and custom tool source code, all stored without encryption. This abysmal operational security (OPSEC) ironically gave researchers complete visibility into the operation.

The attack’s first stage was large-scale scanning. The attacker’s tooling systematically searched for open FortiGate management interfaces on ports 443, 8443, 10443, and 4443. When management ports were found, login attempts were made with commonly reused credentials. Successfully compromised devices had their entire configuration files extracted.

FortiGate configuration files are golden keys for attackers. A single file contains SSL-VPN user credentials, administrator account information, internal network topology, firewall policies, and IPsec VPN peer settings. The attacker used AI-written Python scripts to automatically parse and decrypt these configuration files, extracting reusable credentials.

Post-compromise access to internal networks went through VPN. Once inside, the attacker ran custom reconnaissance tools written in Go and Python. These tools pulled internal network ranges from VPN routing tables and classified them by size, explored services with the open-source port scanner gogo, and automatically identified SMB hosts and domain controllers. Discovered HTTP services were then scanned with the open-source vulnerability scanner Nuclei to produce prioritized target lists.
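The range-triage step can be sketched in a few lines of Python. The bucket names and thresholds below are illustrative guesses, not values from the report:

```python
import ipaddress

def classify_ranges(cidrs):
    """Bucket network ranges pulled from a routing table by address count.
    Thresholds are illustrative; the AWS report does not specify them."""
    buckets = {"small": [], "medium": [], "large": []}
    for cidr in cidrs:
        net = ipaddress.ip_network(cidr, strict=False)
        if net.num_addresses <= 256:        # /24 and smaller
            buckets["small"].append(str(net))
        elif net.num_addresses <= 65536:    # up to /16
            buckets["medium"].append(str(net))
        else:                               # /15 and larger
            buckets["large"].append(str(net))
    return buckets
```

Triage like this lets automation attack many small subnets quickly while queueing large ranges for slower, staged scanning.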

Domain takeover used Meterpreter’s mimikatz module. DCSync attacks extracted large volumes of NTLM password hashes from domain controllers. At least one confirmed compromise case showed domain administrator accounts reusing plaintext passwords extracted from FortiGate configuration files. This was followed by pass-the-hash/pass-the-ticket attacks and NTLM relaying for lateral movement.

The final target in attack sequences was backup servers. They particularly focused on Veeam Backup & Replication servers. Backup servers hold credentials for multiple systems, and destroying them fundamentally neutralizes victim organizations’ recovery capabilities. This pattern matches the stage immediately before ransomware deployment.

Configurations, Not Vulnerabilities: AI-Selected Attack Vectors

The security industry has long viewed software vulnerabilities with CVE (Common Vulnerabilities and Exposures) numbers as the center of threats. FortiGate alone has had critical vulnerabilities like CVE-2024-21762 discovered throughout 2024, prompting emergency patching by companies worldwide. However, this attack saw known vulnerability exploits achieve minimal success.

The AWS report records this point explicitly. The attacker's operational notes listed multiple CVEs on attempt lists: CVE-2019-7192, CVE-2023-27532 (Veeam), CVE-2024-40711 (Veeam), and others. But the results were repeated failures: target services were already patched, required ports were closed, or the vulnerabilities didn't apply to the target OS versions. In fact, the attacker's internal reports contained assessments that major infrastructure targets were "well defended with no exploitable vulnerability vectors."

In contrast, configuration errors — exposed management ports and weak credentials — enabled hundreds of compromises. This is where AI’s role stood out. Vulnerability exploitation requires technical skills to modify and debug code based on target systems. When attackers reached that level, they were repeatedly blocked. However, procedural tasks like detecting configuration errors, extracting credentials, and running standard post-compromise tools could be perfectly supplemented by AI generating step-by-step instructions, writing parsing scripts, and developing reconnaissance tools.

The IBM X-Force 2026 report's numbers show this pattern isn't unique to this attacker. Throughout 2025, initial compromise through vulnerabilities or misconfigurations in publicly accessible applications accounted for 40% of incidents, making it the #1 attack vector. Among these, missing authentication was the most common weakness.

AI Assembly Line: GenAI Integrated Throughout Attack Stages

What’s particularly notable about this campaign is that AI wasn’t used for just specific attack stages but was integrated throughout the entire process from reconnaissance to report writing. AWS researchers confirmed the attacker was using at least two different commercial LLM services with divided roles.

One served as main tool developer, attack planner, and operational assistant. The other was used as auxiliary attack planner when pivoting within specific compromised networks. In one observed case, the attacker pasted an entire list of internal IP addresses, hostnames, confirmed credentials, and detected services from an active victim organization into an LLM, requesting “step-by-step plans to gain control of systems currently inaccessible with available tools.”

AI-generated attack plans contained step-by-step exploit instructions, expected success rates, time estimates, and prioritized task trees. They even referenced academic research on offensive AI agents. Reconnaissance tool source code bore clear traces of AI authorship: unnecessary comments that repeated function names, simple architectures overly focused on formatting over functionality, and JSON parsing through string matching instead of proper deserialization.
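One of those authorship tells, a comment that merely repeats the name of the function it annotates, can even be checked mechanically. The following is a toy heuristic of my own, entirely illustrative and not the researchers' method:

```python
import re

def comment_echoes_name(source):
    """Toy heuristic: flag comments that merely restate the name of the
    function defined on the next line, a common AI-authorship tell."""
    hits = []
    lines = source.splitlines()
    for i, line in enumerate(lines[:-1]):
        comment = re.match(r"\s*#\s*(\w+)", line)
        defn = re.match(r"\s*def\s+(\w+)\s*\(", lines[i + 1])
        if comment and defn and comment.group(1).lower() == defn.group(1).lower():
            hits.append(defn.group(1))
    return hits
```

Real attribution work combines many such weak signals; no single tell proves machine authorship.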

As a result, the attacker’s infrastructure accumulated scripts written in various languages for VPN connection automation, large-scale scanning orchestration, credential extraction tools, and result aggregation dashboards. This volume of custom tooling typically indicates a well-equipped development team. In reality, it was created by one person or very few people through AI-assisted development.

Skill Gaps Filled by AI

The AWS report also made clear that this attacker wasn't an exceptional hacker; their technical level was assessed as "low to medium." The attacker could run standard offensive tools and automate routine tasks, but lacked the ability to compile custom exploits, debug ones that failed, or improvise when automation broke down.

These limitations appeared throughout operations. When encountering hardened environments or sophisticated defenses, the attacker moved to softer targets instead of persistently attacking. Creative pivots beyond AI-suggested automation paths weren’t even attempted. Their strength lay not in technical depth but in efficiency and scale provided by AI.

From another perspective, this is the core message. If this attacker had been a nation-state-backed APT group, operations of this scale would already have been possible. What’s frightening is that the figure of penetrating 600 devices across dozens of countries was achieved by financially motivated individuals or small teams. Pre-AI, this scale would have required much larger teams, more resources, and longer timeframes.

IBM X-Force 2026 Paints the Bigger Picture

While the AWS report documented one specific campaign, IBM X-Force’s 2026 Threat Intelligence Index shows data that the same trends are accelerating globally.2

The most notable figure is the 44% increase in public-facing application vulnerability attacks. This resulted from the combined effect of absent authentication controls and AI-accelerated vulnerability discovery. X-Force ranked this vector as the #1 initial compromise pathway for 2025. This means the FortiGate campaign isn’t an exceptional incident but part of a broader trend.

Supply chain threats also grew dramatically. Major supply chain and third-party breach incidents nearly quadrupled since 2020. The main cause was attackers exploiting trust relationships in development workflows, CI/CD platforms, and SaaS integrations. AI coding tool proliferation was expected to further pressure code pipelines.

The ransomware ecosystem became more distributed and dangerous. Active ransomware/extortion groups in 2025 increased 49% year-over-year to 109 (from 73 in 2024). The top 10 groups’ share decreased 25%. Lowered technical barriers led to more small-scale, opportunistic operators with structures harder to track than major groups.

AI services themselves became a new credential risk. Over 300,000 ChatGPT account credentials were traded on dark web markets throughout 2025, the result of infostealer malware operators adding AI services to their target lists. When a personal ChatGPT account that shares a password with a corporate account is compromised, those credentials become a side door into corporate systems. This is structurally identical to the credential-reuse pattern exploited in the FortiGate campaign.

Defense Checklist for Corporate Security Teams

The defense principles commonly emphasized by the AWS report and IBM X-Force Index aren’t cutting-edge technologies. The vulnerabilities AI automatically discovers and exploits are basic problems known for years.

Block Management Interface Exposure

Management ports (443, 8443, 10443, 4443) of network equipment including FortiGate should never be directly exposed to the internet. Management access should only be allowed through VPN or dedicated management networks.
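A quick self-audit for this checklist item, run from a vantage point outside your own network, can be as simple as the sketch below. The port list comes from the campaign writeup; the timeout is an arbitrary choice:

```python
import socket

MGMT_PORTS = [443, 8443, 10443, 4443]  # ports probed in the campaign

def exposed_ports(host, ports=MGMT_PORTS, timeout=2.0):
    """Return the subset of ports on host that accept a TCP connection.
    Run against your own devices from an external network position."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports
```

Any non-empty result for a firewall's management interface means the device is findable by exactly the kind of mass scanning described above.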

Mandate Multi-Factor Authentication (MFA)

Remember that none of the 600 compromises in this campaign used a FortiGate software vulnerability. Had MFA been enforced on administrator and VPN accounts, stolen credentials alone would not have been enough to get in.

Protect Configuration Files

When FortiGate configuration files are stolen, internal network maps are completely compromised. Strictly limit access to backup configuration files and verify credential separation to ensure passwords in configurations aren’t shared with other systems.
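A first-pass audit of a configuration backup can simply locate credential-bearing lines so each one can be checked for reuse elsewhere. The patterns below are illustrative and will not cover every FortiOS version's syntax:

```python
import re

# Illustrative patterns only -- FortiOS keyword names vary across versions.
CRED_PATTERNS = {
    "encrypted secret": re.compile(r"\bset\s+(?:passwd|password|psksecret)\s+ENC\s"),
    "cleartext secret": re.compile(r"\bset\s+(?:passwd|password)\s+(?!ENC\b)\S+"),
}

def find_credential_lines(config_text):
    """Return (line_number, label) pairs for lines that appear to carry
    secrets, so each can be checked against passwords used on other systems."""
    findings = []
    for lineno, line in enumerate(config_text.splitlines(), 1):
        for label, pattern in CRED_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, label))
                break  # one label per line is enough for triage
    return findings
```

Even "encrypted" entries matter here: the campaign showed that extracted FortiGate secrets were decrypted and reused, so any password appearing in a config must be treated as unique to that device.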

Isolate Backup Infrastructure

Backup infrastructure including Veeam servers should be isolated from general production networks. Considering the pattern of backup server takeover occurring immediately before ransomware deployment, alert settings detecting backup server access attempts are particularly important.

Shorten Patch Cycles and Prioritize Vulnerabilities

IBM’s figure showing vulnerability exploits account for 40% of incidents reinforces the importance of patch management. Vulnerabilities in public services accessible without authentication are top patch priorities.

Monitor AI Service Accounts

Verify whether AI service business accounts like ChatGPT use the same passwords as corporate emails. AI service credential theft by infostealer malware could become pathways for corporate system penetration.

Posture Management and Continuous Configuration Auditing

IBM X-Force Red penetration test data confirmed misconfigurations as the most common initial access pathway. Automated Cloud Security Posture Management (CSPM) tools should be combined with regular manual configuration audits.
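Configuration auditing can start with a small script even before CSPM tooling is in place. This sketch walks a FortiGate-style interface block and flags management protocols enabled on interfaces you designate as WAN-facing; the interface names and keywords are assumptions, not checked against any specific FortiOS version:

```python
def audit_allowaccess(config_text, wan_interfaces=("wan1", "wan2")):
    """Flag WAN-facing interfaces whose allowaccess list permits
    management protocols. Names and keywords are illustrative."""
    risky = {"https", "http", "ssh", "telnet"}
    findings = []
    current = None
    for line in config_text.splitlines():
        s = line.strip()
        if s.startswith("edit "):
            current = s.split(None, 1)[1].strip('"')
        elif s == "next":
            current = None
        elif s.startswith("set allowaccess") and current in wan_interfaces:
            enabled = set(s.split()[2:])
            bad = sorted(enabled & risky)
            if bad:
                findings.append((current, bad))
    return findings
```

Running a check like this on every configuration change, rather than quarterly, is what turns posture management from a report into a control.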

The AI-Security Asymmetry

The implications of this incident don’t stop at the level of “configure FortiGate equipment properly.” Attackers can now automate the entire process of scanning, exploiting, and moving within networks for basic security vulnerabilities using AI. This means all organizations with defense capabilities below a certain level become potential targets.

The attacker described in the AWS report retreated when facing hardened environments. No matter how much AI increases efficiency, fundamentally well-defended systems undermine the economics of automated attacks. The attacker ultimately moved to easier targets.

This is the core challenge now facing corporate security teams. As AI-based attacks become faster and more widespread, organizations with solid foundational security naturally drop in attackers’ priorities. Paradoxically, the most effective way to respond to advanced AI threats is to thoroughly implement old principles: closing management ports, enabling MFA, and applying patches.


Footnotes

  1. Moses, C. J. (February 20, 2026). “AI-augmented threat actor accesses FortiGate devices at scale.” AWS Security Blog. Amazon Web Services.

  2. IBM X-Force. (February 25, 2026). “2026 X-Force Threat Intelligence Index: Making the case for securing identities, AI‑enhanced detection and proactive risk management.” IBM. Press release: “IBM 2026 X-Force Threat Index: AI-Driven Attacks are Escalating as Basic Security Gaps Leave Enterprises Exposed.” IBM Newsroom.
