It starts like every other compliance audit. The external auditors arrive, there are coffee and pleasantries, they ask for our access logs, and our team confidently pulls up the SIEM dashboard. We have alerts. We have dashboards. We have 90 days of event data neatly visualized in a palette that cost more than our first car.
Then they ask a different question.
"Can you show us every account with access to your production databases, what permissions they hold, and when each one was last used?"
Silence. Tumbleweed. Blank expressions. Suddenly our team is furtively querying four different cloud consoles, two spreadsheets that were last updated in Q3, and a Confluence page that predates our last two head-of-engineering hires.
This is the compliance audit that exposes what our SIEM was never actually designed to catch. And in 2026, it’s happening at scale, because the attack surface grew legs, wandered into multi-cloud infrastructure, and started spawning AI agents while everyone was diligently watching the logs.
Our SIEM Is Doing Its Job. And That's the Problem.
The SIEM vendors aren't wrong; they're just solving a different problem, and in an age of progressive, evolving international cybersecurity standards, it's no longer the whole problem.
A SIEM tells us what happened. It ingests events, correlates logs, fires alerts. It’s a retrospective instrument, and a powerful one. But it has a fundamental architectural blind spot: it can’t tell us who has access right now, and it almost certainly doesn't know about identities that never generate events, because access that has never been used leaves no trail in the logs.
The logs are full of things that did happen. The compliance audit finds the things that could happen, because the access was never removed.
"Attackers spent three weeks moving through identity systems and cloud applications before touching a single endpoint. The first SIEM alert came when the damage was already done." [Todyl]
Traditional SIEMs also rely on consistent, structured logging, but attackers are increasingly using living-off-the-land techniques: legitimate credentials, legitimate tools, legitimate access. No anomaly to detect. No alert to fire. Impeccable logs. The identity-based attack chain doesn't announce itself. It just uses the door that was left open.
What the Auditors Actually Found
The scenario is fictional, but the stats that support every element of it are very real.
Finding 1: 15,000 ghost accounts
The auditors request a full export of all enabled user accounts across AWS, GCP, Azure, GitHub, and your primary SaaS stack. What comes back is a spreadsheet nobody wanted to see.
An average enterprise has approximately 15,000 inactive "ghost" accounts, human and non-human identities alike, still sitting in an enabled state [Varonis]. Accounts belonging to contractors who finished engagements two years ago. Engineers who moved teams and got re-provisioned rather than migrated. Test accounts created during an integration sprint that never got cleaned up. Junk inherited from minor mergers and acquisitions.
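The detection logic here is not exotic; the hard part is getting the inventory in one place. As a minimal sketch (with hypothetical account data standing in for a real IAM export, such as a cloud provider's credential report), flagging ghost accounts is just a last-activity check, with never-used accounts flagged unconditionally:

```python
from datetime import datetime, timedelta

# Hypothetical inventory rows: (account_id, account_type, last_activity).
# In practice this would come from an IAM export, not a hardcoded list.
ACCOUNTS = [
    ("contractor-jdoe", "human", datetime(2024, 1, 15)),
    ("ci-deploy-bot", "non-human", datetime(2026, 1, 30)),
    ("test-integration-7", "non-human", None),  # never used since provisioning
]

def find_ghost_accounts(accounts, now, max_idle_days=90):
    """Return account IDs with no activity within the idle window.

    Accounts that have never authenticated (last_activity is None)
    are flagged unconditionally: they generate no events at all,
    which is exactly the SIEM blind spot described above.
    """
    cutoff = now - timedelta(days=max_idle_days)
    return [
        acct_id
        for acct_id, _type, last_activity in accounts
        if last_activity is None or last_activity < cutoff
    ]

ghosts = find_ghost_accounts(ACCOUNTS, now=datetime(2026, 2, 1))
# Flags the long-idle contractor account and the never-used test account.
```

The point of the sketch: none of the flagged accounts would ever trip an event-based alert, because the signal is the *absence* of activity combined with the *presence* of standing access.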
None of these accounts generated meaningful log volume. They'd been sitting quietly in our identity store, maintaining standing access to systems they were provisioned for during a role that no longer exists.
Our SIEM? It didn't flag a single one. They weren't doing anything wrong. They were just... there.
Finding 2: 85% of serious incidents trace to service accounts
The auditors pivot to non-human identities: service accounts, CI/CD pipeline tokens, API keys. Here's where our audit gets problematic.
ReliaQuest's incident response data from the first half of 2024 found that 85% of the breaches they responded to involved compromised service accounts, up from 71% in the same period of 2023. Service account exploitation is accelerating, and it's not hard to understand why.
- Service accounts are often provisioned once and never reviewed.
- They frequently hold far broader permissions than any specific workflow requires.
- Their credentials (API keys, tokens, certificates) rotate infrequently or never.
- They don't follow joiner-mover-leaver workflows because they're not people.
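The rotation problem in particular is easy to surface once credential metadata is inventoried. A minimal sketch (with hypothetical key records; a real implementation would read key creation dates from each provider's API) compares key age against its rotation policy:

```python
from datetime import datetime, timedelta

# Hypothetical service-account key records: (key_id, created_at, max_age_days).
KEYS = [
    ("svc-payments-key", datetime(2023, 6, 1), 90),   # years old, 90-day policy
    ("svc-etl-key", datetime(2026, 1, 10), 90),       # recently rotated
]

def overdue_for_rotation(keys, now):
    """Return key IDs whose age exceeds their rotation policy."""
    return [
        key_id
        for key_id, created_at, max_age_days in keys
        if now - created_at > timedelta(days=max_age_days)
    ]

stale = overdue_for_rotation(KEYS, now=datetime(2026, 2, 1))
```

Again, nothing here requires event data: the finding comes from credential *state*, which is why it never surfaces in a SIEM.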
In the Microsoft Midnight Blizzard attack, the threat actor exploited a legacy OAuth application, an unmanaged non-human identity, with full access to Microsoft's production environment. They then used it to spawn additional OAuth applications and grant them elevated permissions. The initial access vector? A test account with no MFA that had never been deprovisioned.
Our SIEM logged that OAuth token activity. It just didn't know to care about it.
Finding 3: The AI agent access inventory (or lack thereof)
This is the finding that's going to define compliance audits for the next five years.
Rubrik Zero Labs' 2025 research found that AI agents in the enterprise now outnumber human users by 82 to 1. 96% of enterprises recognize AI agents as an identity risk, but fewer than half have any governance controls in place for them.
This is the identity security debt that hasn't yet hit the books. AI agents are being deployed by engineering teams, provisioned with API keys and service account credentials, granted access to production systems so they don't fail, and then left running with persistent, often over-permissioned access that nobody owns.
"Agents get god mode so they don't fail, and that privilege becomes the default operating state. Hardcoded tokens don't just live forever, they become shared infrastructure across agents, pipelines, and environments." [Hacker News]
The OAuth and SAML frameworks underpinning our IAM stack were designed for static human and machine identities. They have no native concept of ephemeral, task-scoped, AI-driven access that should exist for the duration of a workflow and nothing more.
The auditors want to know: which of your AI agents has read access to your customer data store? What did they actually use? Which ones are still active from the pilot project that was deprioritized six months ago?
Our SIEM has logs from the agents that generated events. It has nothing on the agents that didn't, but still have the keys.
The Detection Gap is Measured in Months
IBM's breach research puts the average time to detect a credential-based breach at 292 days. Gartner analysis shows that 45% of breaches are discovered by external parties, not internal teams, meaning that in nearly half of cases, we’re not the ones who find out first.
The compliance audit is functioning, in this scenario, as the external party. And it found everything the SIEM didn't. Not because the auditors have better tooling, but because they asked the right structural question: not "what happened?" but "who has access?"
What "Seeing Access" Actually Requires
The architecture gap here is not a SIEM configuration problem. It is a category problem. We need a different type of tool for a different type of question.
Cloud Infrastructure Entitlement Management (CIEM) platforms address the access visibility layer that SIEMs don't: they map every identity, human and non-human, to every permission, across every cloud environment, and track how that state changes over time [Trustle]. Not event by event. State by state.
Specifically, the capabilities that would have pre-empted every finding in the audit above:
- Continuous entitlement inventory: every account, service account, and AI agent identity visible in a single plane, across AWS, GCP, Azure, Microsoft 365, GitHub, and SaaS, with real-time permissioning data, not quarterly spreadsheet exports.
- Orphaned account detection: automated flagging of accounts that have been inactive beyond defined thresholds, with workflow-driven remediation rather than manual Jira tickets.
- Zero Standing Privileges (ZSP): just-in-time access that provisions on request and expires automatically, through Slack or Teams, without a helpdesk ticket, eliminating the surface area that standing access creates.
- Non-human identity lifecycle management: service accounts and AI agent credentials with the same governance controls as human accounts, scoped access, rotation policies, usage tracking, and automated deprovisioning when a workflow ends.
- Time-series identity risk tracking: not a point-in-time snapshot, but a longitudinal view of how privilege is accumulating, drifting, and expanding across our environment. The access creep that SIEMs are structurally blind to.
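The last capability, time-series tracking, is conceptually just a diff between entitlement snapshots. As a hedged sketch (hypothetical identities and permission names, with snapshots modeled as sets of identity-permission pairs), privilege drift between two points in time looks like this:

```python
def entitlement_drift(previous, current):
    """Diff two point-in-time entitlement snapshots.

    Each snapshot is a set of (identity, permission) pairs.
    The 'granted' side is the access creep a SIEM never sees:
    state change, not event volume.
    """
    return {
        "granted": sorted(current - previous),
        "revoked": sorted(previous - current),
    }

# Hypothetical snapshots from two inventory runs a quarter apart.
q3_snapshot = {("alice", "prod-db:read")}
q4_snapshot = {
    ("alice", "prod-db:read"),
    ("alice", "prod-db:admin"),              # quietly escalated
    ("agent-summarizer", "customer-store:read"),  # new AI agent identity
}

drift = entitlement_drift(q3_snapshot, q4_snapshot)
```

Run continuously rather than quarterly, this is the longitudinal view the bullet describes: privilege accumulating between audits, visible as it happens.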
This is also the architecture that regulators are converging on. NIST SP 800-207, CMMC, SOC 2 Type II, ISO 27001, and PCI DSS 4.0 all increasingly ask the same question the auditors asked: can you demonstrate continuous least-privilege access (LPA) across your environment? Not just at audit time. Continuously.
For the AI Access Problem Specifically
The NHI governance challenge deserves its own callout, because it is moving faster than most security teams are ready for.
Gartner predicts that by 2028, at least 15% of daily enterprise decisions will be made autonomously by agentic AI. The World Economic Forum has flagged that agentic AI is already spawning non-human identities in security blind spots, receiving broad, persistent access to sensitive systems without the safeguards that would apply to a human identity requesting the same access.
The right architecture for AI agent access is: task-scoped, time-bound, least-privilege, and fully audited, with the agent identity's access lifecycle managed the same way we'd manage a contractor. Onboarding for the task. Off-boarding when it's done. Full entitlement history available for the compliance audit that will, at some point, ask.
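To make "task-scoped and time-bound" concrete, here is a minimal sketch of that grant model (hypothetical names throughout; a real system would mint actual short-lived tokens via the identity provider). A grant is valid only for one named scope and only until its expiry, after which there is nothing left to deprovision:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class AgentGrant:
    """A task-scoped, time-bound access grant for an AI agent identity."""
    agent_id: str
    scope: str          # e.g. "customer-store:read"
    expires_at: datetime

    def allows(self, scope, now):
        # Valid only for the named scope and only until expiry;
        # an expired grant is dead by construction.
        return scope == self.scope and now < self.expires_at

def grant_for_task(agent_id, scope, now, ttl_minutes=30):
    """Onboard an agent for one task; offboarding is automatic at expiry."""
    return AgentGrant(agent_id, scope, now + timedelta(minutes=ttl_minutes))

grant = grant_for_task("agent-summarizer", "customer-store:read",
                       now=datetime(2026, 2, 1, 12, 0))
```

Persisting each grant as it is issued also yields, for free, the full entitlement history the audit will eventually ask for.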
That audit will come. The question is whether the answer is "here's our entitlement inventory" or another uncomfortable silence.
An Audit Doesn't Have to Be a Reckoning
The compliance audit in this post found what it found because the organization was relying on event-based security tooling to answer a state-based question. SIEMs are not the wrong tool. They are a tool solving the wrong problem for identity visibility.
CISOs and cloud architects who are preparing for their next SOC 2, ISO 27001, or CMMC review need an access inventory that is continuous, not quarterly. In 2026, auditors are absolutely going to ask about AI agent access. They need an entitlement model that reflects real usage, not provisioning history. And they need an NHI (non-human identity) governance layer that treats AI agent credentials with the same rigor as human access.
The good news: this is a solvable architecture problem. The access visibility layer exists. The just-in-time workflows exist. The NHI lifecycle management tooling exists.
The question for our next compliance audit is whether we'll have answers ready.
Now’s the time to get a grip on continuous entitlement inventory across multi-cloud environments. Start your Trustle free trial today, and we’ll make the audit problem go away in as little as 30 minutes.