AI Agents, Context & Governance: Complex but Critical for Risk Management
- Rory Duncan

- Feb 26
- 4 min read

The last twelve months have seen many new developments in the cybersecurity industry, but one that stands out is the arrival of AI agents. The fact that these semi-autonomous, task-focused software entities are now “in the wild” is both alarming and exciting. AI agents are here and being used extensively by technology vendors, service providers, and enterprises across most industry sectors. While many are still getting to grips with adoption and deployment, early adopters are already facing the challenges that accompany any new technology: management, governance and compliance.
While regulations governing the use of AI have lagged its most recent advances - GenAI, bots, semi-autonomous agents and so on - the security implications of unregulated use are becoming clear. AI agents can offer task completion at a scale and speed that humans cannot match. At the same time, such powerful capabilities open new attack surfaces that security teams need to be ready for. Treating AI agents as non-human identities, in the way that we treat service accounts, is not enough. We need to start treating AI agents in much the same way as we do human beings: as complex entities that require careful management, nuanced governance and real-time monitoring.
New solutions for a new era?

Security professionals have been using Governance, Risk & Compliance (GRC) tools to manage and report on network, application, cloud and on-premises infrastructure for many years. As a broad category of activities, governance does not require a regulatory environment to function, but it is a precursor to compliance being successful. Governance is something that any organisation can apply to any part of its infrastructure, including AI tools and agents, without necessarily creating a compliance framework.
While governance is often mentioned in the same phrase as compliance, AI and AI agents have developed far ahead of the ability of regulatory bodies to catch up. Many large and/or established vendors - such as IBM, Microsoft, Palo Alto Networks, SailPoint, DataDome, and others - have added AI agent management and governance capabilities to their AI security platforms. These solutions tend to be integrated with the vendor’s overall GRC framework and focus on holistic approaches to security operations. It makes sense for these vendors to add AI agent governance to the management of AI security. Others, however, are focusing on the specific requirements of AI agents and architecting new approaches, using human psychology, behavioural analytics and context-specific permissions.
When managing AI agents, it is increasingly clear that context is the key to both productivity and risk mitigation. Context covers all the information sources, prompt history, documents and so on that allow the Large Language Model (LLM) to perform its task-specific work as accurately and effectively as possible. Context can be complex, incorporating rules-based behaviour, compliance requirements, corporate policies and more. In theory, the more context - of the correct kind - an agent is given, the better its results will be. The use of Retrieval-Augmented Generation (RAG) is especially important for AI agents, allowing them to connect with trusted, up-to-date, domain-specific sources prior to response generation. It may seem that this level of context is overly cumbersome and unnecessary, but the enhanced capabilities of AI agents mean that careful understanding of their requirements, behaviour and potential interactions is critical.
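The RAG pattern described above can be sketched in a few lines: retrieve the most relevant snippets from a trusted knowledge base first, then build the prompt the agent's LLM will see. The corpus, the naive keyword-overlap scoring, and the prompt layout below are all illustrative assumptions, not any vendor's implementation:

```python
# Minimal sketch of Retrieval-Augmented Generation (RAG):
# ground the agent's prompt in trusted, domain-specific sources
# before the LLM generates a response.

# Hypothetical in-house knowledge base (assumption for illustration).
KNOWLEDGE_BASE = {
    "expense-policy": "Expenses over $500 require director approval.",
    "data-handling": "Customer data must not leave the EU region.",
    "access-policy": "Service credentials rotate every 30 days.",
}

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Rank snippets by naive keyword overlap with the query.
    Real systems use embeddings and vector search instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        KNOWLEDGE_BASE.values(),
        key=lambda text: len(q_words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(task: str) -> str:
    """Prepend retrieved context so the model answers from trusted sources."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(task))
    return f"Context:\n{context}\n\nTask: {task}"

print(build_prompt("Can customer data be processed outside the EU?"))
```

The point of the sketch is the ordering: retrieval happens before generation, so the agent's answer is anchored to current, vetted material rather than whatever the base model memorised.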
Context is Everything
This year’s RSAC Innovation Sandbox Top 10 Finalists include two companies offering tools that help organisations govern AI agents at a much more detailed level:
Geordie
Geordie approaches AI agent governance by helping customers gain a deep understanding of agent behaviour and by providing tools to develop detailed context requirements. Geordie describes a fundamental shift: while standard software is an execution engine for human choices, AI agents combine LLMs with real-world tools to make independent choices based on a task brief. In turn, Geordie believes that this shift changes the way risk manifests in AI agents, given the minute-to-minute changes in the tools they have access to and the context they are currently processing. The solution is contextual - as opposed to static - governance: deep configuration awareness, behavioural observability over time, and real-time, scenario-based interventions.
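To make the contrast between static and contextual governance concrete, here is a hedged sketch of the idea: a decision depends not just on who the agent is, but on which tools it currently holds and how it has behaved recently. The class, thresholds and verdicts below are hypothetical illustrations, not Geordie's actual product:

```python
# Illustrative sketch of contextual (vs. static) agent governance.
from dataclasses import dataclass, field

@dataclass
class AgentState:
    name: str
    active_tools: set[str] = field(default_factory=set)      # can change minute to minute
    recent_actions: list[str] = field(default_factory=list)  # behavioural history

def evaluate(agent: AgentState, action: str, tool: str) -> str:
    """Return 'allow', 'review', or 'block' based on live context."""
    # Configuration awareness: the tool must be currently granted.
    if tool not in agent.active_tools:
        return "block"
    # Behavioural observability: flag a sudden burst of identical actions.
    if agent.recent_actions[-3:] == [action] * 3:
        return "review"  # scenario-based intervention: escalate to a human
    return "allow"

agent = AgentState("invoice-bot", active_tools={"read_invoices"})
print(evaluate(agent, "export_data", "send_email"))          # blocked: tool not granted
agent.recent_actions = ["read_invoices"] * 5
print(evaluate(agent, "read_invoices", "read_invoices"))     # flagged for review
```

Note that the same agent gets different verdicts depending on its live state; a static allow/deny list could not express that.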
Token Security
Token Security considers AI agents to be a new form of identity threat, but the firm’s goal is to help companies adopt them - mitigating these threats by treating AI agents as “first-class” identities. Token’s offerings cover established non-human identities such as API tokens and service accounts but lean strongly into the classic identity management approach to AI agents, prioritising visibility, ownership attribution, control, and governance. Like Geordie, Token believes that context is key, describing how it applies intent-aware, least-privilege access to AI agents, with the intention of ensuring that agents have only the permissions needed for their purpose, and only for the time required.
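The "only the permissions needed, only for the time required" idea can be sketched as a grant that is scoped to a declared purpose and expires automatically. The class and method names below are illustrative assumptions, not Token Security's API:

```python
# Hedged sketch of intent-aware, least-privilege access for an AI agent:
# every grant is bound to a declared purpose and a time-to-live.
import time

class ScopedGrant:
    def __init__(self, purpose: str, permissions: set[str], ttl_seconds: float):
        self.purpose = purpose              # the declared intent for this grant
        self.permissions = permissions      # minimal set of allowed actions
        self.expires_at = time.monotonic() + ttl_seconds

    def allows(self, action: str, purpose: str) -> bool:
        """Permit only the declared purpose, only until expiry."""
        return (
            purpose == self.purpose
            and action in self.permissions
            and time.monotonic() < self.expires_at
        )

grant = ScopedGrant("reconcile-invoices", {"read_ledger"}, ttl_seconds=60)
print(grant.allows("read_ledger", "reconcile-invoices"))    # True: in scope, in time
print(grant.allows("read_ledger", "export-customer-data"))  # False: wrong intent
print(grant.allows("write_ledger", "reconcile-invoices"))   # False: never granted
```

The key design point is that the same credential is useless outside its stated intent or after its window closes, which shrinks the blast radius if an agent is compromised or misbehaves.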
The Gist
Management of non-human identities has evolved in response to the rapid adoption of semi-autonomous AI agents. The complex nature of their interactions with both human operators and other agents means that security teams need a more sophisticated approach, one that uses behavioural context to determine an agent’s permissions. Leading security vendors are adding advanced AI discovery and management capabilities to their platforms, but several specialist firms are developing sophisticated AI agent governance products. Firms such as Geordie and Token Security are leveraging an identity management approach that mirrors the complexities of human operator requirements, but at a vastly greater scale and in near real-time. Implementing context-specific permissions that can change on a minute-by-minute basis seems a daunting prospect - especially when many hundreds of different agents are being monitored. For some of these finalists at this year’s RSAC Innovation Sandbox awards, confidence is high that this can be achieved. Watch this space!