Light bulb Limited Spots Available: Secure Your Lifetime Subscription on Gumroad!

Mar 17, 2026 · 5 min read

Half of All Cybersecurity Incidents Will Involve AI by 2028—Most Teams Aren't Ready

Gartner warns that custom AI apps are being deployed faster than security teams can test them, and 75% of regulated companies risk fines.

The Rush to Ship AI Is Outpacing Security

Across nearly every industry, companies are racing to build and deploy AI powered applications. The competitive pressure is real: organizations that move slowly risk falling behind rivals who have already automated workflows, launched intelligent products, and embedded machine learning into their core operations. But in that race to ship, a critical discipline is being left behind—security.

According to research from Gartner, 50% of all enterprise cybersecurity incident response efforts will focus on incidents involving custom built AI applications by 2028. That is a dramatic shift from today's threat landscape, and most organizations are nowhere near ready to handle it.

A chart illustrating Gartner's prediction that 50% of cybersecurity incidents will involve AI by 2028

What the 50% Prediction Actually Means

When Gartner says half of incident response efforts will involve AI by 2028, the implication is not simply that attackers will use AI to launch more sophisticated attacks—though that is certainly part of the picture. The more important and often overlooked dimension is that the AI systems organizations build and deploy internally are themselves becoming attack surfaces.

Custom built AI applications—tools trained on proprietary data, integrated into business workflows, and given access to sensitive systems—introduce risks that traditional security frameworks were never designed to address. A model that behaves correctly during testing may behave in unexpected ways once exposed to real world inputs. A pipeline that processes customer data may inadvertently leak information through model outputs. An AI assistant integrated with internal databases may be manipulated through carefully crafted prompts.

Christopher Mixter, VP Analyst at Gartner, put it plainly: "AI is evolving quickly, yet many tools—especially custom built AI applications—are being deployed before they're fully tested." The result is a growing inventory of inadequately secured systems operating at the heart of enterprise infrastructure.

Why Custom AI Applications Are a Security Nightmare

Traditional software has a relatively stable attack surface. A web application serves predictable requests, handles known data formats, and behaves consistently across inputs. Security teams have mature playbooks for testing these systems, identifying vulnerabilities, and responding to incidents.

Custom AI applications break nearly every assumption those playbooks depend on. Three properties make them particularly difficult to secure:

  • Complexity. Large language models and machine learning pipelines involve billions of parameters, opaque internal representations, and emergent behaviors that cannot be fully characterized through conventional code review or penetration testing.
  • Dynamic behavior. Unlike static software, AI models can behave differently depending on how they are prompted, what context they receive, and how their outputs feed into downstream systems. A system that appears secure today may respond to tomorrow's inputs in ways no one anticipated.
  • Deployment speed. Business units are building and shipping AI tools faster than security teams can evaluate them. By the time a risk assessment is complete, the system may already be handling production traffic.

Gartner notes that these systems are "complex, dynamic and difficult to secure over time"—and that most security teams still lack clear processes for handling AI related incidents when they occur.

The Compliance Time Bomb Facing Regulated Industries

For companies operating in regulated industries—finance, healthcare, insurance, telecommunications—the stakes extend well beyond operational risk. Gartner projects that manual AI compliance processes will expose 75% of regulated organizations to fines exceeding 5% of global revenue.

That is a staggering figure. For a company generating a billion dollars in annual revenue, a 5% fine translates to fifty million dollars. For larger enterprises, the exposure runs into the hundreds of millions.

The problem is structural. Compliance teams that rely on manual processes—spreadsheet based inventories of AI systems, periodic audits, ad hoc risk assessments—simply cannot keep pace with the rate at which new AI tools are being introduced. By the time a compliance review is completed for one system, ten more may have been deployed. Regulators, meanwhile, are moving quickly. The EU AI Act is already in force. Sector specific guidance from financial and healthcare regulators is expanding. Companies that cannot demonstrate ongoing oversight of their AI systems will find themselves in an increasingly untenable position.

What Security Teams Need to Do Now

The two year window before 2028 is short, but it is not empty. Organizations that act now can build the foundations needed to manage AI security at scale. Gartner recommends that by 2028 more than 50% of enterprises will use dedicated AI security platforms to secure both third party AI service usage and custom built AI applications—suggesting that purpose built tooling, not adapted legacy security stacks, is the path forward.

Practically, security and development teams should be working together on several fronts:

  • Build an AI inventory. You cannot secure what you do not know exists. Teams need a complete, continuously updated record of every AI system in production, including tools built by individual business units outside of formal engineering processes.
  • Define AI specific incident response procedures. Existing runbooks for data breaches or malware infections do not map cleanly onto AI incidents. Teams need new procedures that account for model poisoning, prompt injection, output manipulation, and other AI specific attack vectors.
  • Shift security left in the AI development lifecycle. Security review should not begin after an AI application is ready to ship. Risk assessment, threat modeling, and adversarial testing should be integrated into the build process from the start.
  • Automate compliance monitoring. Manual processes will not scale. Automated tools that continuously monitor AI systems for policy violations, data handling anomalies, and behavioral drift are essential for regulated organizations.

The Race Between AI Deployment and AI Security

The fundamental tension here is not new. Security has always struggled to keep pace with the speed of technology adoption. But AI represents a qualitative shift in that challenge, not merely a quantitative one. The systems being deployed are not just faster or more capable versions of familiar software—they are architecturally different, behaviorally unpredictable, and deeply embedded in the decisions and workflows that drive enterprise operations.

The organizations that will navigate 2028 and beyond successfully are not those that slow down their AI adoption. They are those that build security capability at the same pace as deployment capability—treating AI security as a core engineering discipline rather than an afterthought, and investing in the tools, processes, and talent needed to stay ahead of the risk.

The Gartner prediction is a warning, but it is also a roadmap. Half of all cybersecurity incidents involving AI by 2028 is not an inevitable outcome—it is a projection based on current trajectories. Organizations that change their trajectory now can change the outcome.