What Europe’s new AI security baseline really means for our Cloud, our access model, and our sanity
If you’re responsible for security in a modern organization, congratulations: you’re now also responsible for AI agent security. Not in a vague “ethics and policy” sense, but in a very real, operational, prove-this-to-an-auditor way. Happy days.
Enter ETSI EN 304 223, Europe’s new baseline standard for securing AI models and systems. It’s thorough, sensible, and, let’s be honest, another document we didn’t ask for but absolutely have to deal with.
🔗 Securing Artificial Intelligence (SAI): Baseline Cyber Security Requirements for AI Models and Systems
We’ve gone through it line by line, so you don’t have to. Here’s what actually matters, where teams might struggle, and how modern cloud security practices can make this survivable.
What ETSI EN 304 223 Is (and why we can’t ignore it)
ETSI EN 304 223 defines baseline cybersecurity requirements for AI systems across their entire lifecycle, from design and development through deployment, maintenance, and retirement.
It’s not law on its own, but it’s clearly designed to support regulatory regimes like the EU AI Act, which already expects deployers of high-risk AI systems to retain logs and demonstrate ongoing risk management and oversight.
In other words, ETSI EN 304 223 is how regulators expect us to show our homework.
This Is an Identity and Access Standard in Disguise
Yes, ETSI EN 304 223 talks about data poisoning, prompt injection, and model integrity. But buried in its most important clauses is a quieter and more relevant message:
AI systems must only have the permissions they actually need, and we must be able to prove it.
The standard explicitly requires:
- A complete inventory of assets and interdependencies
- Least-privilege access for AI systems interacting with other services
- Audit trails for configuration changes, prompts, models, and operational access
- Continuous logging and monitoring to support investigations and compliance
That should sound very familiar to anyone who’s spent time untangling cloud IAM.
Why This Lands Squarely on Cloud Security Teams
AI systems don’t operate in isolation. In practice, they:
- Query data lakes
- Call internal APIs
- Trigger workflows
- Modify cloud resources
- Authenticate as service principals, roles, or service accounts
Which means your AI system is effectively a high-speed, non-human identity with fingers in everything.
And that’s a problem, because most organizations still manage cloud access like it’s 2015.
The Real Risk: Standing Privilege + AI Speed
IBM’s 2025 data breach research found that:
- 13% of organizations experienced breaches involving AI models or applications
- 97% of those lacked proper AI access controls
That’s not a model problem. That’s an access problem.
Pair that with Microsoft’s finding that most compromised cloud workloads are attacked within 48 hours of deployment.
Now imagine an AI agent with standing admin permissions, connected to production systems, moving faster than any human review process.
ETSI EN 304 223 is trying to stop that.
What Compliance Looks Like in the Real World
Let’s translate ETSI requirements into operational reality.
1. We Need Continuous Visibility Into Effective Permissions
ETSI requires a “comprehensive inventory of assets and their interdependencies.” In cloud terms, that means:
- Knowing who can do what, where, and how, not just what was intended on paper
- Understanding transitive access (role chaining, group inheritance, trust relationships)
- Tracking human and non-human identities across AWS, Azure, and GCP
Static spreadsheets, a folder of screenshots, and annual access reviews don’t survive contact with modern cloud estates.
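To make the "transitive access" point concrete, here is a minimal sketch of how an effective-permissions inventory differs from a paper one. The inventory data, principal names, and permission strings are all hypothetical; real tooling would pull this from provider APIs, but the core idea is the same graph walk over role-assumption edges:

```python
from collections import deque

# Hypothetical inventory: which principals can assume which roles,
# and which permissions each identity grants directly.
ASSUMABLE = {
    "ai-agent": ["etl-role"],
    "etl-role": ["datalake-reader"],
}
DIRECT_PERMS = {
    "ai-agent": {"sqs:SendMessage"},
    "etl-role": {"s3:PutObject"},
    "datalake-reader": {"s3:GetObject", "s3:ListBucket"},
}

def effective_permissions(principal: str) -> set[str]:
    """Walk the role-assumption graph to collect every permission the
    principal can reach, directly or through chained roles."""
    perms, seen = set(), set()
    queue = deque([principal])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        perms |= DIRECT_PERMS.get(node, set())
        queue.extend(ASSUMABLE.get(node, []))
    return perms

# The agent's policy "on paper" grants one queue action; its effective
# reach includes everything two role hops away.
print(sorted(effective_permissions("ai-agent")))
# → ['s3:GetObject', 's3:ListBucket', 's3:PutObject', 'sqs:SendMessage']
```

The gap between `DIRECT_PERMS["ai-agent"]` and the full traversal result is exactly what a spreadsheet-based review misses.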
2. Least Privilege Has to Be Dynamic, Not Aspirational
ETSI is explicit: permissions must be granted only as required and risk-assessed.
In practice, this means:
- Removing standing privilege for both humans and workloads
- Using time-bound just-in-time access for sensitive operations
- Scoping permissions to specific resources and actions
- Automatically revoking access when it’s no longer needed
This aligns cleanly with Zero Trust principles and NIST SP 800-207’s guidance on per-request access decisions.
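The four bullets above can be sketched as a single mechanism: a grant that carries its own expiry, checked per request. This is an illustrative model, not any vendor’s API; the class and field names are invented for the example:

```python
import time
from dataclasses import dataclass

@dataclass
class Grant:
    """A time-bound permission grant (hypothetical model)."""
    principal: str
    action: str
    resource: str
    expires_at: float  # epoch seconds

class JitAccessStore:
    """No standing privilege: every check re-evaluates, and expired
    grants are dropped automatically, so revocation needs no cleanup job."""
    def __init__(self) -> None:
        self._grants: list[Grant] = []

    def grant(self, principal: str, action: str, resource: str,
              ttl_seconds: float) -> None:
        self._grants.append(
            Grant(principal, action, resource, time.time() + ttl_seconds))

    def is_allowed(self, principal: str, action: str, resource: str) -> bool:
        now = time.time()
        # Fail closed: discard anything past its TTL before answering.
        self._grants = [g for g in self._grants if g.expires_at > now]
        return any(g.principal == principal and g.action == action
                   and g.resource == resource for g in self._grants)

store = JitAccessStore()
# Scope to a specific action and resource, for 15 minutes only.
store.grant("ai-agent", "s3:PutObject", "prod-bucket", ttl_seconds=900)
print(store.is_allowed("ai-agent", "s3:PutObject", "prod-bucket"))    # True while live
print(store.is_allowed("ai-agent", "s3:DeleteObject", "prod-bucket"))  # False: never granted
```

The per-request check is the Zero Trust part: nothing is allowed because it was allowed yesterday.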
3. Audit Trails Must Join Up Across AI and Cloud
ETSI requires audit logs for:
- Changes to system prompts and configurations
- Model and dataset lifecycle events
- System and user actions for investigations
Auditors won’t accept fragmented evidence. They’ll expect:
- Access requests and approvals
- Time-bound privilege elevation records
- Change history that correlates who approved access with what the AI system changed
If those live in separate tools with no shared timeline, compliance becomes a scavenger hunt.
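What "a shared timeline" means in practice is just a merge-and-sort over both evidence sources, keyed by identity and time. The record shapes below are invented for illustration; real access-request systems and cloud audit logs each have their own schemas, but the correlation step looks like this:

```python
from datetime import datetime

# Hypothetical records from two separate tools: an access-request
# system and a cloud audit log. Field names are illustrative.
approvals = [
    {"ts": "2025-06-01T09:00:00", "principal": "ai-agent",
     "approved_by": "alice", "scope": "prod-config:write"},
]
change_events = [
    {"ts": "2025-06-01T09:12:30", "principal": "ai-agent",
     "change": "updated system prompt v14 -> v15"},
    {"ts": "2025-06-01T09:40:10", "principal": "ai-agent",
     "change": "rotated dataset snapshot"},
]

def joined_timeline(approvals: list[dict], events: list[dict]) -> list[tuple]:
    """Merge both sources into one chronological trail, tagging each row
    with its origin, so an auditor can read who approved access and what
    the AI system then changed, in order."""
    rows = ([("approval", a["ts"], a["principal"],
              f"approved by {a['approved_by']} for {a['scope']}")
             for a in approvals]
            + [("change", e["ts"], e["principal"], e["change"])
               for e in events])
    return sorted(rows, key=lambda r: datetime.fromisoformat(r[1]))

for kind, ts, who, detail in joined_timeline(approvals, change_events):
    print(f"{ts}  {kind:<8} {who}: {detail}")
```

An auditor reading this output sees approval, then change, then change, against one clock. Two tools with two clocks and no shared principal field can’t produce it.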
Why Integrations Matter More Than New Tools
One of the subtler implications of ETSI EN 304 223 is that AI security is cross-platform by definition.
Controls must work across:
- AWS IAM roles and STS
- Azure Entra ID, RBAC, and PIM
- GCP service accounts and IAM conditions
- Identity providers like Okta
- Collaboration platforms and developer workflows where access decisions originate
Security teams don’t need another silo. They need policy-driven control and evidence that spans the cloud stack.
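One way to picture "policy-driven control that spans the cloud stack" is a provider-agnostic policy record rendered into each platform’s native binding. The mappings below are deliberately simplified assumptions, not complete AWS, Azure, or GCP schemas:

```python
# One normalized intent, rendered per provider.
POLICY = {"principal": "ai-agent", "resource": "customer-data"}

def to_aws(p: dict) -> dict:
    # Simplified IAM policy statement fragment.
    return {"Effect": "Allow", "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{p['resource']}/*"}

def to_azure(p: dict) -> dict:
    # Simplified RBAC role assignment.
    return {"roleDefinition": "Storage Blob Data Reader",
            "scope": f"/storageAccounts/{p['resource']}"}

def to_gcp(p: dict) -> dict:
    # Simplified IAM binding.
    return {"role": "roles/storage.objectViewer",
            "resource": f"//storage.googleapis.com/{p['resource']}"}

for render in (to_aws, to_azure, to_gcp):
    print(render(POLICY))
```

The point is not the translation tables themselves but that one policy definition produces one audit trail, regardless of which cloud enforced it.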
The Quiet Shift ETSI Is Forcing
ETSI EN 304 223 doesn’t say “buy new AI security software.” What it says, very clearly, is:
If we can’t control and prove access, we can’t secure AI.
That pushes organizations toward eliminating standing privilege, granting time-bound access on demand, and unifying audit evidence across clouds.
Or, put more bluntly: the same identity problems we’ve been tolerating for years are now compliance risks.
Trustle advisor Bruce Schneier once wrote, “Watching it all is vital for security.”
ETSI EN 304 223 formalizes that idea for AI. Not as a theory, but as an expectation.
AI didn’t break our access model. It just exposed how fragile it already was.