
Agentic AI has an access governance issue…

It all starts innocently enough.

"Ooh, ah, that's how it always starts. But then later there's running and screaming."
- Dr. Ian Malcolm, Jurassic Park [1993]

A team spins up an AI agent to help with ticket triage, cloud clean-up, onboarding tasks, or threat hunting. It needs access to Slack, Jira, the CI/CD pipeline, maybe a bit of AWS or Azure, and perhaps a database because apparently every modern workflow now ends with “and then it touched production.” The agent works. Everyone loves it. It saves time. Nobody wants to be the villain who says no. Happy days.

Then comes the question: who or what, exactly, is in charge once that agent can act?

Here lies the actual problem with AI agency. Not whether the model sounds confident. Not whether the prompt is clever. Not whether the demo impressed the board. The real issue, and it’s a pressing one, is whether the organization still controls identity, access permissions, approval paths, expiry, logging, and removal when software starts doing the work that once belonged to people.

For CISOs, SOC teams, and cloud security engineers, this is where the conversation needs to get more technical and less abstract. Agentic AI is not (mainly) a model governance issue. It is an access governance issue.

The AI Agency Problem is Really About Authority

NIST’s draft Cybersecurity AI Profile is unusually on point. It recommends that AI agents should have unique identities, their own permissions, and access that is bound in context and time to prevent what it calls “excessive agency.” It also recommends policy checks or human approval for sensitive actions. That’s not abstract ethics language. That is classic identity security and privileged access management translated into AI terms. 

AI agency should never mean unlimited authority. It should mean delegated execution inside clearly defined boundaries.
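That “policy checks or human approval for sensitive actions” recommendation is easy to sketch in code. The following is a minimal, illustrative gate, not a real product API: the action names, `SENSITIVE_ACTIONS` set, and `request_human_approval` stub are all hypothetical, and a real implementation would page an approver in chat and enforce a timeout.

```python
# Illustrative sketch of an action gate for an AI agent.
# All names here are hypothetical, not a vendor API.

SENSITIVE_ACTIONS = {"delete_resource", "modify_iam_policy", "write_production_db"}

def request_human_approval(agent_id: str, action: str, context: dict) -> bool:
    # Placeholder: in practice this would notify an approver in chat
    # and block until they respond or a timeout expires.
    print(f"[approval needed] agent={agent_id} action={action} context={context}")
    return False  # deny by default until a human explicitly says yes

def execute_with_gate(agent_id: str, action: str, context: dict) -> str:
    """Allow routine actions; route sensitive ones through a human."""
    if action in SENSITIVE_ACTIONS:
        if not request_human_approval(agent_id, action, context):
            return "denied"
    return "executed"
```

The design choice that matters is the default: a sensitive action with no approval answer is denied, not deferred. Delegated execution stays inside the boundary even when the human is asleep.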

That matters because most organizations are nowhere near ready. In January 2026, the Cloud Security Alliance (CSA) reported that:

  • 78% of organizations do not have formally adopted policies for creating or removing AI identities.
  • 92% are not confident their legacy IAM can manage AI and non-human identity risk.
  • 79% rate their ability to prevent attacks via non-human identities as low or moderate. 

That’s not a minor maturity gap. That is the security equivalent of building a motorway but forgetting to mark the lanes.

And This Gets Messy Fast

Agents rarely live in one place.

A useful agent touches identity providers, chat tools, source control, pipelines, cloud APIs, SaaS apps, and data stores. It may also inherit trust through connectors, plug-ins, service accounts, or workload identities. Every one of those links expands the blast radius if permissions are too broad, long-lived, or poorly reviewed. OWASP now treats non-human identity risk as a top-tier security problem, calling out improper offboarding, overprivileged identities, insecure authentication, secret leakage, and long-lived secrets as core failure modes. 

And this is not theoretical. On 31 March 2026, Unit 42 published research showing how weaknesses around Google Cloud Vertex AI agents could enable compromise paths involving overprivileged AI agents, data exposure, and broader cloud abuse. Once an agent can use tools and cloud permissions, sloppy entitlement design stops being an admin nuisance and becomes an attack path. 

Safe Delegation With Least Friction

For a technical audience, the answer is not “ban agents.” It is to run them like privileged non-human identities with lifecycle discipline.

That means every agent should have a clear owner, a defined purpose, a dedicated identity, minimal permissions, short-lived credentials, logging, review points, and a clean removal path. AWS recommends roles and temporary credentials rather than long-term credentials for both humans and workloads. Google Cloud similarly pushes Workload Identity Federation (WIF) and secure service-account practices to reduce reliance on standing secrets. Microsoft is now treating agents as first-class identities in Conditional Access, which is exactly where this should be heading. 
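The common thread in the AWS, Google Cloud, and Microsoft guidance is that credentials should expire on their own rather than wait to be revoked. Here is a small sketch of that idea in plain Python. The `Credential` class is illustrative, not a real cloud SDK object; in production you would use STS-style temporary credentials or workload identity federation rather than rolling your own.

```python
# Sketch: a short-lived, single-purpose credential that dies on its own.
# The Credential type and field names are illustrative, not a cloud SDK.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Credential:
    agent_id: str
    scope: str            # one purpose, e.g. "s3:GetObject on one bucket"
    expires_at: datetime

def issue_credential(agent_id: str, scope: str, ttl_minutes: int = 15) -> Credential:
    """Issue a credential with a hard expiry; no revocation job needed."""
    expiry = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)
    return Credential(agent_id, scope, expiry)

def is_valid(cred: Credential) -> bool:
    """A credential past its expiry is simply dead, everywhere, at once."""
    return datetime.now(timezone.utc) < cred.expires_at
```

The point of the pattern: if the agent is compromised, the attacker inherits a fifteen-minute window and one narrow scope, not a standing secret.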

This is where the practical value lies for defenders. The right operating model makes AI agency easier for the business and less terrifying for the people who’ll be blamed when it goes sideways.

Making AI Agency Frictionless Without Making It Reckless

The trick’s not to add bureaucracy. It is to move controls closer to the workflow.

The strongest pattern is a platform that gives teams a live view of entitlements across cloud and SaaS, lets people request temporary access in tools they already use, supports approval flows in chat, expires access automatically, and keeps audit-grade logs of who requested what, why, who approved it, and when it was revoked. Add automated provisioning and deprovisioning, policy-driven approvals, and time-bound on-call access, and you are much closer to zero standing privilege without making engineers file ceremonial tickets like it is 2009.
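Stripped to its essentials, that request-approve-expire-audit loop is a small state machine. The sketch below models it in memory for illustration only; the field names and the `AUDIT_LOG` list are hypothetical stand-ins for what a real platform would back with an IdP, an approval bot, and durable storage.

```python
# Sketch of a just-in-time access grant: approved, time-bound, audited.
# In-memory structures are illustrative; a real system persists all of this.
import time
import uuid

AUDIT_LOG: list[dict] = []

def grant_access(requester: str, entitlement: str, reason: str,
                 approver: str, ttl_seconds: int) -> dict:
    """Record who asked, why, who approved, and when it ends."""
    grant = {
        "id": str(uuid.uuid4()),
        "requester": requester,
        "entitlement": entitlement,
        "reason": reason,
        "approver": approver,
        "expires_at": time.time() + ttl_seconds,
    }
    AUDIT_LOG.append({"event": "granted", **grant})
    return grant

def check_access(grant: dict) -> bool:
    """Access evaporates at expiry with no human follow-up required."""
    if time.time() >= grant["expires_at"]:
        AUDIT_LOG.append({"event": "expired", "id": grant["id"]})
        return False
    return True
```

Notice that the audit trail answers the four questions auditors actually ask: who requested what, why, who approved it, and when it stopped being valid.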

That matters because provisioning and deprovisioning for agents is no longer back-office housekeeping. It is runtime security. If an AI agent can be created in minutes, call APIs at machine speed, and interact with sensitive systems, then its joiner-mover-leaver process has to be just as fast, and far more reliable, than the human version. OWASP explicitly warns that stale or dormant non-human identities become exploitable when offboarding is weak. 
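The OWASP failure mode of stale, ownerless non-human identities is also the easiest to automate away. A minimal sweep, sketched below with illustrative field names and an assumed 30-day dormancy threshold, flags any agent identity that is either dormant or orphaned so the leaver process can fire.

```python
# Sketch: flag dormant or ownerless non-human identities for offboarding.
# Field names and the 30-day threshold are illustrative assumptions.
from datetime import datetime, timedelta, timezone

DORMANCY_LIMIT = timedelta(days=30)

def find_dormant(identities: list[dict], now: datetime) -> list[str]:
    """Return IDs of agent identities with no recent activity or no owner."""
    stale = []
    for ident in identities:
        inactive = now - ident["last_used"] > DORMANCY_LIMIT
        orphaned = ident.get("owner") is None
        if inactive or orphaned:
            stale.append(ident["id"])
    return stale
```

Run on a schedule, this is the leaver half of joiner-mover-leaver operating at the same machine speed as the agents it polices.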

Security Can’t Wait for Legislation

Security leaders can’t wait for a magic future “AI access law” before tightening this up. The control logic already exists. Least privilege, accountability, logging, human oversight, secure-by-design deployment, and lifecycle management are all established expectations. NCSC’s secure AI guidance stresses secure deployment, operation, maintenance, logging, and lifecycle management. The EU AI Act also places weight on human oversight and log retention for higher-risk uses. ENISA’s 2025 threat landscape likewise reinforces identity and privileged access controls as a core and critical line of defense. 

The question’s not whether the machine appears autonomous. The question is whether we still control the authority model around it. If the answer is no, then our organization hasn’t created helpful AI agency. It’s created unsupervised privilege with a nicer user interface.

“The key to a happy life is to accept you are never actually in control.”
- Simon Masrani, Jurassic World [2015]

Agency must stay with us. Humans can delegate tasks. We should not delegate governance.  

If AI agents now sit across our cloud, SaaS, and automation stack, they also sit inside our access problem. In about 30 minutes, you can see every entitlement across users, agents, and non-human identities, strip out standing privilege, and replace it with time-bound, policy-driven access that actually proves least privilege - check out our free trial. The result: AI agency that moves fast, without expanding our blast radius.

Nik Hewitt

Technology

April 7, 2026
