When “who are you?” becomes the only security question that matters
For years, security teams have told a comforting story: attacks start with application exploits or malware. That story is now outdated. Today's most effective attacks don't break anything at all. They log in.
What’s changed isn’t just phishing volume or credential theft; it’s capability. Adversarial use of AI has turned identity abuse into a high-throughput, low-friction operation. Attackers don’t just steal credentials anymore; they impersonate identities convincingly enough to be granted access on purpose. And in cloud environments, identity is the control plane.
We are entering a period that, in hindsight, we will call the era of AI identity attacks.
Identity Was Already the Weakest Link. AI Just Industrialized It
Cloud security quietly moved the blast radius from hosts to permissions years ago. APIs replaced servers. Roles replaced firewalls. Identity became the gatekeeper.
AI simply noticed.
Microsoft reports blocking over 7,000 password attacks per second, and identity-based attacks now account for the overwhelming majority of intrusion attempts across cloud and SaaS environments.
Meanwhile, Verizon’s 2025 Data Breach Investigations Report shows that:
- 46% of compromised systems with corporate logins were unmanaged devices infected by infostealers.
- 54% of ransomware victims had corporate credentials circulating via dark web services and criminal marketplaces.
AI doesn’t invent these weaknesses. It connects them, faster, more convincingly, and at scale.
What Is an AI Identity Attack, Really?
An AI identity attack isn’t a new exploit. It’s an acceleration layer over identity compromise.
AI models are now used to:
- Craft hyper-personalized spearphishing that mirrors internal tone and workflow.
- Run long-form conversational phishing (chat, email, voice) without human fatigue.
- Generate deepfake audio or video to bypass helpdesks and approval processes.
- Analyze cloud permission models to identify viable privilege escalation paths.
- Operate compromised identities programmatically, not manually.
Gartner reports that 62% of organizations experienced a deepfake-related attack in 2025, many of them exploiting automated or semi-automated identity processes rather than human judgment alone.
That last part matters. These attacks don’t rely on tricking one person. They rely on systems that assume identity requests are genuine.
From Synthetic Humans to Synthetic Cloud Identities
The obvious examples get the headlines: fake CEOs, voice-cloned finance directors, video calls that look just real enough. But the quieter shift is happening in cloud-native identity.
Once attackers obtain any legitimate foothold (OAuth consent, a service account token, a CI/CD credential), AI accelerates the next phase:
- In AWS: abusing AssumeRole, iam:PassRole, or federated identity trust policies
- In Azure: over-privileged service principals, mis-scoped Graph permissions, stale PIM roles
- In GCP: service account key creation, workload identity federation, project-level IAM sprawl
None of these require zero-days. They require permissions that already exist.
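As a concrete illustration, this is the kind of check defenders run against the first item on that list: scanning an AWS-style role trust policy for overly broad `sts:AssumeRole` grants. The policy document and function name here are illustrative, not any vendor's actual tooling; it is a minimal sketch of the pattern, assuming the standard IAM policy JSON shape.

```python
def flag_risky_trust_policy(policy_doc: dict) -> list:
    """Flag statements in an AWS-style role trust policy that allow
    sts:AssumeRole from overly broad principals."""
    findings = []
    for stmt in policy_doc.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if not any(a in ("sts:AssumeRole", "sts:*", "*") for a in actions):
            continue
        principal = stmt.get("Principal", {})
        # "Principal": "*" (or {"AWS": "*"}) means any AWS account may assume the role
        if principal == "*" or (isinstance(principal, dict)
                                and principal.get("AWS") in ("*", ["*"])):
            findings.append("trust policy allows AssumeRole from ANY principal")
        # No Condition block means no ExternalId / MFA / source restriction
        elif "Condition" not in stmt:
            findings.append("AssumeRole allowed without any Condition restriction")
    return findings

# Example: a role trust policy open to the world
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "*"},
        "Action": "sts:AssumeRole",
    }],
}
print(flag_risky_trust_policy(policy))
```

Nothing here is exotic: the finding is simply a permission that already exists, sitting in a trust policy nobody has reviewed recently.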
Mandiant’s M-Trends 2025 confirms stolen credentials as one of the fastest-growing initial access vectors, outpacing many traditional malware delivery techniques.
AI doesn’t guess which permission works. It reasons about it.
The AI Identity Attack Chain (In Practice)
Most cloud identity incidents now follow a familiar pattern:
- Recon: org charts, tooling, cloud providers, vendors
- Initial access: phishing, infostealers, OAuth abuse, helpdesk resets
- Session theft: cookies, refresh tokens, API keys
- Control-plane entry: cloud APIs, not hosts
- Privilege pathing: chaining permissions to effective admin
- Persistence: new roles, apps, federated identities
- Impact: data theft, extortion, supply-chain compromise
AI compresses the time between steps. What used to take weeks now happens in hours.
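The "privilege pathing" step above is, mechanically, a graph search: identities are nodes, and an edge exists wherever one identity can obtain another (AssumeRole, PassRole, key creation). The sketch below uses a hypothetical permission graph; the identity names are invented for illustration.

```python
from collections import deque

# Hypothetical permission graph: an edge A -> B means an identity holding A
# can obtain B (e.g. via AssumeRole, PassRole, or service-account key creation).
EDGES = {
    "dev-user":         ["ci-role"],           # stolen session
    "ci-role":          ["deploy-role"],       # sts:AssumeRole in trust policy
    "deploy-role":      ["lambda-exec-role"],  # iam:PassRole to a new function
    "lambda-exec-role": ["account-admin"],     # over-privileged execution role
}

def escalation_path(start, target):
    """Breadth-first search for a chain of obtainable identities
    from start to target. Returns the path, or None if none exists."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in EDGES.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(escalation_path("dev-user", "account-admin"))
# Every hop in the path uses a permission that already exists.
```

This search is trivial for software and tedious for humans, which is exactly why automating it collapses the timeline from weeks to hours.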
Why Traditional Defenses Fall Over
Training helps, but it doesn’t expire stolen access. MFA helps, but not against token theft or OAuth abuse. Detection helps, but identity misuse often looks authorized.
Trustle advisor Bruce Schneier put it succinctly: AI increases both the quality and quantity of phishing, pushing attacks further into the realm of believable human interaction rather than technical exploitation.
Once an attacker is legitimately authenticated, most security tooling politely steps aside.
The Only Sustainable Response: Shrink Identity Power
We can’t out-train adversarial models. We can, however, out-design them.
The defensive shift is architectural:
1. Continuous Entitlement Visibility
We need a live, multi-cloud map of who and what can do what, not last quarter’s spreadsheet. Identity risk lives in combinations, not single roles.
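To make "risk lives in combinations" concrete: a few permissions that each look routine can, together, form an escalation path. A sketch of a combination check, using a hypothetical ruleset and identity names (the AWS permission strings are real, the rest is illustrative):

```python
# Hypothetical known-dangerous permission combinations: each permission alone
# looks routine, but together they form an escalation path.
RISKY_COMBOS = [
    ({"iam:PassRole", "lambda:CreateFunction"},
     "create a function running as a privileged role"),
    ({"iam:CreatePolicyVersion"},
     "rewrite an attached policy to grant admin"),
    ({"sts:AssumeRole", "iam:UpdateAssumeRolePolicy"},
     "open a role's trust policy, then assume it"),
]

def audit_identity(name, permissions):
    """Return escalation warnings for the risky combinations this identity holds."""
    return [why for combo, why in RISKY_COMBOS if combo <= permissions]

perms = {"lambda:CreateFunction", "iam:PassRole", "s3:GetObject"}
print(audit_identity("ci-deployer", perms))
```

A role-by-role review would pass every permission above individually; only a live map of effective entitlements surfaces the combination.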
2. Just-in-Time, Time-Bound Access
Standing access is a gift to AI attackers. Temporary, purpose-bound privileges dramatically reduce the value of stolen identities.
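The shape of a just-in-time grant is simple: every grant carries a reason and an expiry, so a stolen identity is only worth its remaining TTL. A minimal sketch, with hypothetical class and field names:

```python
from datetime import datetime, timedelta, timezone

class Grant:
    """Hypothetical just-in-time access grant: purpose-bound and self-expiring."""
    def __init__(self, identity, permission, reason, ttl_minutes=60):
        self.identity = identity
        self.permission = permission
        self.reason = reason              # every grant records *why*
        self.expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)

    def is_active(self, now=None):
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at

g = Grant("alice", "prod-db:read", "incident triage", ttl_minutes=30)
print(g.is_active())                                      # True within the TTL
print(g.is_active(g.expires_at + timedelta(seconds=1)))   # False once expired
```

An attacker who steals this identity an hour later holds nothing.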
3. Relentless Cleanup of Stale Access
AI loves stale access. Orphaned accounts, forgotten roles, and unused permissions are free persistence.
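Stale access is also the easiest category to find programmatically: compare last-used timestamps against a cutoff. The data below is invented for illustration (the shape resembles what cloud providers' access analyzers export):

```python
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=90)

# Hypothetical last-used export: (identity, permission) -> last use, or None if never used
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
last_used = {
    ("alice", "s3:GetObject"):  datetime(2025, 5, 28, tzinfo=timezone.utc),
    ("ci-bot", "iam:PassRole"): datetime(2024, 11, 2, tzinfo=timezone.utc),
    ("old-svc", "ec2:*"):       None,   # never used at all
}

def stale_permissions(last_used, now):
    """Permissions unused beyond STALE_AFTER are free persistence for attackers."""
    return sorted(
        key for key, when in last_used.items()
        if when is None or now - when > STALE_AFTER
    )

print(stale_permissions(last_used, now))
```

Everything this flags is access nobody is using, so revoking it costs nothing and removes a persistence foothold.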
4. Audit-Ready Evidence by Default
When identity is the perimeter, evidence is your incident response timeline. Access requests, approvals, and expiry must be provable without archaeology.
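One way to make that evidence provable rather than merely logged is to chain each access event to the previous one by hash, so tampering with any earlier record breaks every later one. A minimal sketch with invented event fields (this is one possible design, not a prescribed format):

```python
import hashlib
import json

def record_event(log, event):
    """Append an event chained to the previous record by SHA-256 hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {**event, "prev": prev_hash}
    # Hash the record contents (including the previous hash) deterministically
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    log.append(body)

log = []
record_event(log, {"type": "request",  "who": "alice", "what": "prod-db:read"})
record_event(log, {"type": "approval", "who": "bob", "grant_ttl_min": 30})
record_event(log, {"type": "expiry",   "grant_for": "alice"})

# Each record carries the hash of its predecessor: the request -> approval ->
# expiry timeline can be verified end to end, without archaeology.
assert log[1]["prev"] == log[0]["hash"]
```

The point is not the hashing trick itself but the property it buys: the incident-response timeline is self-verifying by default.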
This isn’t about slowing developers down. It’s about making identity boring, temporary, and tightly scoped, even when requests look perfectly reasonable.
Bottom Line
The next major cloud breach probably won’t start with malware. It will start with a message that sounds right. A request that looks routine. An identity that technically has permission.
AI identity attacks succeed because cloud environments were built for speed and trust. The fix isn’t fear. It’s precision. And in a world where adversarial models can convincingly ask for access, the most powerful security question becomes simple again:
Why does this identity need it, right now?