As AI Agents Become Pervasive, Identity Matters Even More
- Rory Duncan
- Feb 11
- 4 min read
Updated: 2 days ago

A year ago when we started recording our weekly cybersecurity podcast - Cyber Sidekicks - guests on the show didn’t often mention AI agents. Security practitioners mentioned AI in generally positive ways, mostly with regard to the benefits of automation, threat identification, and its potential as an efficient way to process lower-level detection and response tasks.
One year later, the topic of Agentic AI is core to almost all discussions with our guests. It is the most mentioned issue when we ask “what keeps you up at night?”. Something has changed: aspects of AI agents are considered beneficial, but they are also seen as something of a curse, reflecting the capabilities of the technology, while acknowledging the complexities of dealing with AI-armed attackers. The prospect of having semi-autonomous agents perform tasks at a speed and scale that would require many hundreds (or thousands) of security analysts is compelling. Used as a complement to a SOC’s existing capabilities, agents can already improve core detection and response rates, strengthening defensive perimeters and time-to-response commitments.
A New Identity?

But it is those very capabilities - the ability to perform actions independently, to participate in and drive workflow tasks like a fellow team-member would - that magnify potential vulnerabilities for organisations, their security service providers, and customers. Prior to AI agents being used, identity management was already a complex task. With policies in place and multi-factor authentication (MFA) enforced however, human and non-human (e.g. services) identities were somewhat better defined.
With AI agents increasingly becoming the dominant non-human identity, it becomes much more difficult to separate their functions from those that require a degree of context to accurately establish their identity permissions. Some in the security community believe that the old structure has to change from a binary model to a more abstract and contextual one. For complex and tricky threat intelligence triage, there may be a certain amount of risk that SOC analysts must assume on an ongoing basis in order to gain the productivity of their AI agents: treat agents as if they were human, with all the complexity, nuance and unpredictability that goes along with it.
The Agentic Risk
There is no denying the advantages of speed and scale that AI agents can potentially bring to conventional cybersecurity. But does this mean that the “Agentic SoC” is inevitable and something that CISOs should embrace? While AI agents are increasingly being used to bolster defensive capabilities, bad actors are using the same tools to enhance their offensive powers:
Agents are sent to harvest exposed credentials in a fraction of the time that conventional means can.
LLMs are leveraged to automate network/note reconnaissance using malicious code - a form of “LLMJacking”.
Prompt injection attacks are mounted to exfiltrate sensitive or personal information.
MCP servers are tapped at the point where agents are communicating confidential data.
Most recently, OpenClaw’s MoltBook - effectively a social network for AI agents - has been shown to be vulnerable to bot-to-bot prompt injection and data leaks via an exposed API key (amongst other things). If AI agent use is as pervasive as it appears to be, the interactions between them will accelerate exponentially. It could be argued that the assumed risk that human operators will have to accept in order to continue to benefit in terms of AI agent productivity will also increase.
Offense vs. Defense

If bad actors are using the same AI tools as SOC analysts, we can expect faster, more numerous and increasingly complex attacks. At the same time, we are already benefiting from faster, more accurate and extensive threat take-downs. Using a medieval analogy, does this mean that defenders should build their castle walls higher and thicker, or put in more traps and deadly weapons to repel attackers? Both are important. Threat intelligence activities are becoming more offensive in nature: taking a proactive approach to activities such as threat hunting, red teaming and dark web monitoring should be an even more important part of everyone’s security strategy when faced with AI agents used for ill gain. All levels of the technology ‘stack’ are vulnerable - from the cloud, through apps, browsers, APIs, MCP servers, all the way down to the code level. AI tools and agentic AI use means that all points of entry into our digital infrastructures are potentially now open to attack.
The Gist
The debate around the use of AI agents in the context of maintaining robust security is again highlighting the cyclical nature of technology adoption. The same challenges presented themselves when firms migrated workloads to cloud infrastructure, when they started using SaaS apps, and developers interacted with API's. Today however, the challenges are more complex with the adoption of AI and AI agents. As part of a modern cybersecurity strategy, autonomous AI agents can bring significant operational efficiencies, but magnify the risk of vulnerabilities if they are given access permissions that are closer to a human identity. At the same time, new forms of cyber attacks are targeting such agentic behaviour, but restricting agents’ access limits their productivity. A more context-based model for AI agent identity is required - an approach that we already have with human identities. The reality is that bad actors are already using AI agents offensively. It is inevitable that security teams need to use them too - not just defensively but in a much more proactive way - to combat the growing, insidious threat of AI-armed attackers.
You might also be interested in: Stealth AI, Defensive Agents & Quantum Resilience: The 2026 Cybersecurity Battle Lines are Drawn


