Single cloud or multi-cloud, are the current controls in Azure/AWS/GCP enough for AI agent management?
At first, AI agents look like a pretty simple deployment problem. Pick our platform. Wire up a few tools. Give the thing access to a ticketing system, a knowledge base, maybe a cloud account, and off it goes, cheerfully “helping” at machine speed. Rocket science it ain’t.
Then our security engineer slides into the chat, takes a sip of their coffee, and asks a blunt but necessary question: what identity is this thing using, what can it touch, and who’s going to explain that to our auditors six months from now?
Sure, the large cloud providers are all moving fast. It’s a brave new world and they’re stepping up to the challenge. AWS, Microsoft, and Google now offer some serious agent tooling, managed runtimes, and growing observability. But the hard problem isn’t simply running agents. It’s governing them as identities with permissions, dependencies, tool access, and lifecycle events across our cloud and SaaS estates—the who, what, where, when, and how so beloved by international cybersecurity standards. That’s why multi-cloud AI adoption is becoming as much an access governance problem as an AI platform decision.
AWS: Strong Infrastructure, Familiar Controls, Same Old IAM Risk
AWS has taken the clearest infrastructure-first path. Amazon Bedrock AgentCore is designed to let teams build, deploy, and operate agents using any framework and foundation model, while leaning on familiar AWS controls for permissions, governance, and monitoring. For teams already deep in IAM best practices, STS, CloudWatch, and policy engineering, that makes AWS feel operationally familiar rather than exotic.
The catch is also very AWS: We are still responsible for getting IAM right.
AWS documentation explicitly says the managed BedrockAgentCoreFullAccess policy grants broad permissions and recommends creating custom policies restricted to the application’s actual needs. In other words, the platform gives us the parts for least privilege, but it does not spare us the work. If our agent can assume a role, call an API, or pivot into a data source, our permission model still needs the same discipline as any other workload or machine identity. The robot doesn’t become safe just because it has a nice console.
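As a concrete illustration of what “restricted to the application’s actual needs” means in practice, here is a minimal sketch of a scoped inline policy plus a quick wildcard check. The action name and model ARN are illustrative placeholders, not a recommendation for any particular agent; substitute whatever your agent actually calls.

```python
import json

# Sketch: a narrowly scoped policy for one agent, in place of the broad
# managed BedrockAgentCoreFullAccess policy. The action and resource ARN
# below are illustrative placeholders.
AGENT_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "InvokeOnlyTheModelThisAgentNeeds",
            "Effect": "Allow",
            "Action": ["bedrock:InvokeModel"],
            "Resource": [
                "arn:aws:bedrock:us-east-1::foundation-model/example-model-id"
            ],
        }
    ],
}


def has_wildcard_grants(policy: dict) -> bool:
    """Flag Allow statements that grant '*' actions or '*' resources."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            return True
    return False


print(has_wildcard_grants(AGENT_POLICY))  # prints False
print(json.dumps(AGENT_POLICY, indent=2))
```

A check like `has_wildcard_grants` is the kind of guardrail worth wiring into CI, so a hurried teammate can’t quietly widen an agent’s policy back to `"Action": "*"`.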
Azure: The Enterprise Workflow Play
Microsoft’s play is more enterprise workflow-first than infrastructure-first. Foundry Agent Service is a fully managed platform that supports no-code prompt agents as well as code-based hosted agents, and it’s tightly connected to the systems where large organizations already live: SharePoint, Microsoft Fabric, Azure AI Search, and more than 1,400 action connectors through Azure Logic Apps. That makes Azure especially attractive when the real challenge isn’t model access but plugging agents into business processes without building a strange little spider web of bespoke glue code.
More importantly for security teams, Microsoft is now treating agent identity as a first-class concern. Microsoft Entra Agent ID is designed to give agents distinct identities and extend existing governance controls such as lifecycle management, adaptive access policies, risk detection, and logging to those agents. There are even sponsor-governance workflows emerging for agent identities. That is not the full answer to agent risk, not by a long shot, but it is a sign Microsoft appreciates that the problem is identity and governance, not just process orchestration.
GCP: Open Standards, Strong Data Gravity, More Moving Parts
Google’s story is different. GCP is leaning hardest into openness and interoperability. Its Agent Development Kit is open source and model-agnostic, Vertex AI Agent Engine provides the managed runtime, and Google has pushed A2A and MCP support as part of a broader standards-based agent ecosystem. For organizations worried about framework lock-in or building multi-agent systems that need to cooperate across tools and environments, that’s understandably appealing.
Google also has a very practical advantage for multi-cloud AI programs that depend on large datasets: BigQuery and the wider Vertex stack keep data-adjacent agents close to the place where the useful context already sits. That can reduce latency, architecture sprawl, and a fair number of plumbing-induced migraines. But GCP still leaves teams with the classic service-account security problem: lifecycle, role design, role revocation, and safe federation across environments. Google’s own guidance emphasizes service account lifecycle management, access review, role revocation and deprovisioning, and workload identity federation to avoid long-lived keys. Which is sensible. It is also work.
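To make “service account lifecycle management is work” concrete, here is a small sketch that flags stale service-account keys given their metadata. The input shape loosely mimics the `name` and `validAfterTime` fields the IAM API reports for keys; the 90-day threshold and the key names are assumptions for illustration, not Google guidance.

```python
from datetime import datetime, timedelta, timezone

MAX_KEY_AGE = timedelta(days=90)  # illustrative rotation threshold


def stale_keys(keys, now=None):
    """Return the names of keys older than MAX_KEY_AGE.

    Each entry in `keys` carries a `name` and an RFC 3339
    `validAfterTime` creation timestamp, as reported by the IAM API.
    """
    now = now or datetime.now(timezone.utc)
    old = []
    for key in keys:
        created = datetime.fromisoformat(
            key["validAfterTime"].replace("Z", "+00:00")
        )
        if now - created > MAX_KEY_AGE:
            old.append(key["name"])
    return old


sample = [
    {"name": "keys/ci-runner", "validAfterTime": "2023-01-15T00:00:00Z"},
    {"name": "keys/fresh", "validAfterTime": datetime.now(timezone.utc).isoformat()},
]
print(stale_keys(sample))  # prints ['keys/ci-runner']
```

The better fix, as Google’s guidance notes, is workload identity federation so there are no long-lived keys to rotate at all; a sweep like this is the stopgap for the keys that already exist.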
Observability Is Better, but Accountability Isn’t
To be fair, all three clouds now offer fairly decent observability. AWS exposes agent telemetry through CloudWatch, Vertex AI Agent Engine integrates with Cloud Monitoring, and Microsoft Foundry provides dashboards for operational metrics, token usage, latency, and evaluation results. It’s no longer accurate to say the providers give us no visibility. They do.
But observability is not the same thing as governance-grade accountability.
A trace can tell us an agent called three tools, hit two APIs, and spent a fortune in tokens before our first coffee. It doesn’t automatically tell us whether the agent had the right access, whether that access was temporary, who approved it, whether it should have expired, or whether the same identity is now clandestinely sitting in five other systems with privileges nobody remembers granting. That is the ugly operational gap in multi-cloud AI, where nothing is visible all in one place and little is accountable.
And it is not a small gap. It’s a veritable chasm: Deloitte reports that close to three-quarters of companies plan to deploy agentic AI within two years, yet only 21% say they have a mature model for agent governance. Cloud Security Alliance research finds that 43% rely on shared service accounts, 31% allow agents to operate under human identities, 74% say agents often receive more access than necessary, 79% say agents create access paths that are difficult to monitor, and 68% cannot clearly distinguish human from AI-agent activity.
That is not governance. That’s a call to action and vibes in a trench coat.
What Cloud Security Engineers Actually Need
The practical answer is not another dashboard. It is an access governance layer built for non-human identities and multi-cloud AI operations.
That means discovering agent identities and entitlements across AWS, Azure, GCP, and SaaS; granting only the minimum needed access; replacing standing privilege with just-in-time, task-bound access wherever possible; enforcing expiry and revocation by default; routing approvals through familiar workflows such as Slack or Teams; and keeping a clean, exportable (thus auditable) record of who approved what, when, and why.
That is how we make agent access both safer and way less annoying. It’s also how we survive AI auditing without having to reconstruct a murder mystery from CloudTrail, Entra logs, Rippling or Workday joiner/mover/leaver shenanigans, and someone’s half-finished Confluence page.
The providers are getting better at running agents. The real winners across multi-cloud AI will be the teams that learn to govern them like identities, and know for certain, at a moment's notice, what identity they’re using, and what they can touch.
If our agents now span AWS, Azure, GCP, and SaaS, they’ve already outgrown our access model. Sign up at 9 am today and by 9:30 am you can be mapping every entitlement across human and non-human identities in all environments, removing standing privilege, and replacing it with time-bound, policy-driven access that actually proves least privilege. Start a free trial with Trustle and bring your multi-cloud AI back under control, with one view across everything, without slowing it down.