
The MFA Killer: Why Your AI Assistant Is the New Insider Threat and What to Do About It

The AI Productivity Paradox

The corporate gold rush to deploy autonomous AI agents is creating a cavernous and largely unacknowledged security vacuum. In the pursuit of frictionless productivity, organizations are quietly handing the keys to helpers that operate in a logic layer far beyond traditional security controls.


This isn’t a theoretical concern. Looking ahead to 2026, Wendi Whitmore, Chief Security Intelligence Officer at Palo Alto Networks, has warned that autonomous agents are poised to become the new insider threat. Not because they are malicious, but because they are trusted, privileged, and increasingly autonomous.


Just weeks after that warning gained traction, the ServiceNow “BodySnatcher” vulnerability turned prediction into proof.

AI agents are starting to operate as our insider sidekicks, and increasingly autonomously. The flaws live in the logic and integration layer, where identity controls at the login screen become irrelevant.
Non-human identities are the new insider threat.

The incident demonstrated how an AI agent, operating entirely within expected permissions, could be exploited to impersonate users and trigger cascading access across enterprise systems, without passwords, without MFA, and without ever “logging in” as a human. The flaw didn’t live at the perimeter. It lived in the logic layer.


As discussed in recent episodes of the Cyber Sidekicks podcast, this is the AI productivity paradox in action: the more we trust agents to reason over our data, the more we expose ourselves to a form of insider risk that our existing controls were never designed to stop.


The New Insider Threat: It’s Not Who You Think

Looking ahead, the most significant risk to the enterprise won’t be a rogue employee or a sophisticated phishing ring. It will be the systems we explicitly trust.


These agents are effectively privileged users that don’t know how to say “no.”


They operate inside the firewall, using a human user’s permissions, but without human judgment, skepticism, or contextual restraint. They can transform data, move it across systems, summarize it, enrich it, or expose it without a single malicious command ever being issued.


As Whitmore framed it, executives now face the challenge of securing an expected surge of autonomous agents that operate and reason over corporate data on behalf of users.


This flips the security paradigm on its head. We are no longer just hunting bad actors but governing autonomous logic. And we'd better do it fast.


The ServiceNow Vulnerability: A Masterclass in Integration Failure

That abstract risk crystallized in late 2025, when ServiceNow delivered what may become the canonical case study in agentic security failure.


Researchers characterized the flaw—later tracked as CVE-2025-12420—as one of the most severe AI-driven vulnerabilities uncovered to date. Not because the AI “thought wrong,” but because of how it was wired.


Do we know where all our legacy systems reside and what they contain?

ServiceNow had bolted its modern Now Assist agentic AI onto a legacy virtual agent chatbot framework that was lightly guarded and broadly trusted. This is the danger of shadow legacy systems: building 2026-grade intelligence on 2016-era plumbing.





When sophisticated agents are layered over old, under-secured backends, those legacy interfaces become high-speed backdoors into the enterprise orchestration layer.


This wasn’t a model problem. It was an architectural one.


The exploit’s elegance is what makes it so unsettling. Attackers could impersonate a legitimate user—and in some cases seize full platform control—using nothing more than a valid email address.


This is a strategic nightmare: it renders traditional security staples like passwords and Multi-Factor Authentication (MFA) irrelevant. For more than a decade, security leaders have sold MFA to boards as the silver bullet. This vulnerability proves that when the flaw lives in the logic and integration layer, identity controls at the login screen never come into play.

MFA protects authentication. It does not protect delegation.

Once an agent is authorized to act, identity becomes an input variable, not a gate. The vulnerability didn't break MFA; it bypassed the entire concept by operating above it. This is the true meaning of "MFA killer." Not that MFA is useless, but that it was never designed to govern autonomous reasoning in the first place.
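To make "identity becomes an input variable" concrete, here is a minimal, hypothetical sketch (the token, agent, and function names are illustrative, not ServiceNow's API). Once the agent holds a delegated grant, the user it acts for is just a parameter it passes along; no second factor is ever consulted on this path.

```python
# Hypothetical sketch: delegated agent action where the "user" is
# unauthenticated input. MFA gated the original login, not this call.
from dataclasses import dataclass

@dataclass
class DelegatedToken:
    agent_id: str
    scopes: set          # what the agent may do, fixed at grant time

def act_on_behalf_of(token: DelegatedToken, user_email: str, action: str) -> str:
    """The agent invokes a platform action. Note that user_email is
    never verified here: identity is an input variable, not a gate."""
    if action not in token.scopes:
        raise PermissionError(f"agent {token.agent_id} lacks scope {action!r}")
    return f"{action} executed as {user_email}"

token = DelegatedToken(agent_id="now-assist", scopes={"read_ticket", "update_user"})
# Any plausible email works; no MFA challenge occurs anywhere on this path.
print(act_on_behalf_of(token, "victim@example.com", "update_user"))
```

The scope check is real authorization, but it governs what the agent can do, not who it claims to act for. That gap is exactly where this class of exploit lives.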


The Domino Effect: From ServiceNow to the Enterprise

In a modern SaaS environment, logic failures rarely stay contained. They propagate through the orchestration layer.


[Image: dominoes lined up on a wooden surface, a hand poised to knock them over, much like the BodySnatcher vulnerability cascading across orchestration layers.]
An orchestration layer provides opportunity for the perfect domino effect.

ServiceNow isn’t just another application—it’s a coordination hub. When an agent operating inside it is compromised or abused, the blast radius extends immediately outward into connected systems.


In this case, researchers highlighted potential downstream exposure across:

  • Salesforce environments

  • Microsoft ecosystems (including identity, collaboration, and productivity services)


This is why agent breaches are never “local incidents.” A single flaw in how an agent reasons, invokes tools, or chains actions can trigger a domino effect that compromises multiple pillars of the digital enterprise simultaneously.


The agent doesn’t need to move laterally. The platform already has.
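One way to instrument that orchestration layer is to baseline which systems each agent normally touches and how long its tool chains normally run, then alert on deviations. The sketch below is illustrative only; the agent name, baseline, and thresholds are assumptions, not part of any real incident telemetry.

```python
# Hypothetical sketch: flag tool-call chains that exceed an expected
# depth or reach into systems the agent has never touched before.
BASELINE = {"now-assist": {"servicenow"}}   # systems this agent normally uses
MAX_CHAIN = 3                               # assumed normal chain depth

def audit_chain(agent_id: str, calls: list) -> list:
    """calls: (system, tool) pairs in invocation order. Returns alert strings."""
    alerts = []
    if len(calls) > MAX_CHAIN:
        alerts.append(f"chain depth {len(calls)} exceeds {MAX_CHAIN}")
    seen = {system for system, _ in calls}
    novel = seen - BASELINE.get(agent_id, set())
    if novel:
        alerts.append(f"new systems touched: {sorted(novel)}")
    return alerts

# A chain that fans out from ServiceNow into connected platforms
alerts = audit_chain("now-assist", [
    ("servicenow", "read_ticket"),
    ("salesforce", "export_contacts"),
    ("microsoft365", "share_file"),
    ("microsoft365", "grant_access"),
])
# alerts now flags both the excessive depth and the novel systems
```

The point is not this specific heuristic but the layer it watches: credential monitoring would see nothing here, because every call is made with valid delegated authority.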

Why This May Keep Happening: Agents Live Above Our Controls

What the ServiceNow incident made painfully clear is that we are securing the wrong layer: Agents don’t behave like users. They don’t authenticate like applications. They don’t respect boundaries the way humans do.


In December 2025, OWASP formalized this shift with its Top 10 for Agentic Applications, naming risks like privilege abuse, tool misuse, and goal hijacking as primary failure modes.


That list reads less like a future warning and more like a post-incident report because, as OWASP contributors understand, this is happening now. The highest-impact agent failures aren’t model hallucinations—they are authorization without friction, capability without constraint, and reasoning without oversight.


Securing the Autonomous Frontier

We are entering a transition from traditional software breaches to agentic failures, where damage occurs not because someone broke in, but because something (note: not a someone) was allowed to act too freely.


As autonomous agents become the default interface to corporate data, security programs must evolve from guarding doors to governing decisions.

[Image: close-up of a statuesque, possibly non-human, figure with a weathered expression, wearing a textured beanie and scarf.]
Is this identity human? Hard to tell.

That means:

  • Treating agents as non-human privileged identities, with explicit ownership, scoped permissions, and lifecycle management

  • Enforcing supervised execution for high-risk actions like data export, permission changes, or cross-platform orchestration

  • Segmenting agent tools the way we segment networks—no universal toolbelts

  • Instrumenting the orchestration layer to detect abnormal chaining, not just credential misuse

  • Auditing “shadow legacy” interfaces that can invoke modern agent workflows
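
Several of the controls above can be expressed as a single policy gate in front of every agent tool call. The following is a minimal sketch under stated assumptions: the agent names, tool names, and risk categories are hypothetical, chosen only to show scoped toolbelts and supervised execution working together.

```python
# Illustrative policy gate: segmented toolbelts per agent, plus mandatory
# human approval for high-risk actions. All names are hypothetical.
HIGH_RISK = {"data_export", "permission_change", "cross_platform_call"}

# No universal toolbelts: each agent gets only the tools its job requires.
AGENT_TOOLS = {
    "ticket-triage-agent": {"read_ticket", "summarize"},
    "provisioning-agent": {"read_ticket", "permission_change"},
}

def authorize(agent_id: str, tool: str, approved_by: str = None) -> bool:
    allowed = AGENT_TOOLS.get(agent_id, set())
    if tool not in allowed:
        return False          # tool segmentation: out-of-scope call denied
    if tool in HIGH_RISK and approved_by is None:
        return False          # supervised execution: needs a human sign-off
    return True

assert authorize("ticket-triage-agent", "summarize")
assert not authorize("ticket-triage-agent", "data_export")
assert not authorize("provisioning-agent", "permission_change")
assert authorize("provisioning-agent", "permission_change", approved_by="secops")
```

In practice this logic would live in the orchestration layer itself, with the approval step wired to an actual review queue, so that no agent can self-authorize a high-risk action.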


This is not about slowing innovation. It's about recognizing that autonomy is authority. We should also pay attention to the shared ownership of AI tools and the "shadow" relationships that broad ownership can engender. Just as in the early days of SaaS, when everyone and their pet monkey stood up new services, everyone is now standing up AI and agentic AI capabilities.


The Insider Threat Has Changed Faces

When Wendi Whitmore warned that autonomous agents could become the new insider threat, the implication was subtle but profound: the next major security failures wouldn’t come from compromised credentials or disgruntled employees—but from delegated intelligence operating exactly as designed.


The ServiceNow breach confirmed that reality.


Here was a privileged AI assistant, embedded inside the enterprise, capable of acting across systems, and vulnerable not because MFA failed but because MFA was never in scope. The exploit bypassed authentication entirely by abusing trust, delegation, and integration logic.

Identity controls held. The reasoning layer did not. That is the uncomfortable truth security leaders must now confront.

In 2026, the most dangerous “insider” may not be a person at all. It may be a well-intentioned agent with too much authority, too little supervision, and unfettered access to the orchestration layer that binds the enterprise together.


The Question Boards Should Be Asking

As you evaluate your organization’s roadmap for 2026, the critical question is no longer:

“Do we have MFA?”


It is: “Can one of our AI agents cause catastrophic damage without ever logging in?”


The ServiceNow incident suggests the answer is already yes. And that means the insider threat has officially changed faces.
