The Password Is the Vulnerability: How Identity-Based Attacks Became the Dominant Threat of Our Time

There is a persistent myth in cybersecurity that breaches happen because sophisticated attackers exploit obscure technical flaws, deploying novel malware in darkened server rooms while defenders scramble to patch unknown vulnerabilities. The reality documented in Palo Alto Networks' Unit 42 2026 Global Incident Response Report is far less cinematic and far more instructive. Attackers are not breaking in. They are logging in.

Based on analysis of more than 750 major incident response engagements across over 50 countries, spanning October 2024 through September 2025, Unit 42's findings expose an industry-wide failure at the most basic level of access control. Roughly 65% of initial access was driven by identity-based techniques — spanning credential misuse, MFA bypass, IAM misconfigurations, and social engineering — enabling unauthorized access, privilege escalation, and lateral movement. Phishing and vulnerability exploitation, meanwhile, have effectively tied as initial access vectors, each accounting for approximately 22% of intrusions. That parity is itself a striking signal: the conventional wisdom that patching is the primary line of defense has been overtaken by a problem that no patch can solve.

"Enterprise complexity has become the adversary's greatest advantage."
— Sam Rubin, SVP of Unit 42 Consulting & Threat Intelligence, Palo Alto Networks
Unit 42 2026 Global Incident Response Report

By the Numbers

65% of initial access was identity-based. Phishing and vulnerability exploitation tied at 22% each. The conventional wisdom that patching is the primary defense no longer describes where most intrusions begin.

The Many Faces of Identity Abuse

Identity-based attack is not a single technique. It is a category encompassing compromised credentials, brute-force attacks, overly permissive identity policies, and insider threats, all of which share one common feature: they allow an attacker to operate inside an environment as a trusted, authenticated entity rather than as an obvious intruder. Within that category, social engineering led the field, accounting for roughly one-third of all incidents Unit 42 responded to.

Social engineering, for its part, has grown considerably more sophisticated. It is no longer synonymous with clumsy phishing emails full of grammatical errors and implausible urgent requests. More than one-third of social engineering incidents involved non-phishing techniques, including search engine optimization poisoning, fake system prompts, and help desk manipulation. These are attacks designed to exploit institutional processes rather than individual gullibility. When a threat actor calls an IT help desk, impersonates an employee with a plausible story, and successfully triggers a credential reset, they have not taken advantage of a specific person's carelessness. They have exploited a systemic gap in identity verification protocol that would likely succeed regardless of how security-aware the individual help desk technician happened to be.

Two distinct social engineering models are now operating at scale, and understanding the difference between them matters for defense. High-touch attacks target specific, identified individuals in real time: help desk manipulation, voice spoofing, and live impersonation that bypasses MFA without touching a single malicious file. At-scale deception operates differently: ClickFix-style campaigns, SEO poisoning, and fake browser prompts and CAPTCHA tests trick users into executing malware themselves across healthcare, retail, and government sectors simultaneously. One is a precision instrument. The other is a dragnet. Organizations need defenses capable of detecting both, and most have optimized for neither.

Privileged accounts were targeted in 66% of cases, impersonation of internal personnel appeared in 45%, and callback or voice-based techniques appeared in 23% of incidents. This points to a deliberate targeting strategy. Threat actors are not casting wide nets hoping to catch any user. They are conducting reconnaissance, identifying who holds administrative privileges, and then investing time and effort in constructing credible pretexts to compromise exactly those accounts. The return on investment for such targeted work is substantial, because a single privileged account can unlock entire systems rather than a single workstation.

The help desk attack vector deserves particular attention because it highlights how organizational complexity itself becomes a liability. Large enterprises run distributed IT support operations, often with contractors and outsourced staff working across time zones. Verification procedures that might work well in a small office become inconsistently applied at scale. Threat actors such as Muddled Libra bypass multi-factor authentication and exploit IT support processes to escalate privileges in minutes, often without malware. In one documented case, a threat actor moved from initial access to domain administrator in under 40 minutes using only built-in tools and social pretexts. That kind of timeline renders many conventional security controls irrelevant. The attacker was gone before an analyst had finished their morning coffee.

Nation-state actors have extended this playbook in a particularly unsettling direction. North Korean operators have developed what Unit 42 describes as synthetic insider campaigns: fabricating entire professional identities — complete with résumés, social media profiles, and references — to secure remote employment at target organizations. These are not phishing attacks. They are long-con infiltrations where the attacker is hired, onboarded, and given legitimate access, sometimes to developer environments or sensitive internal systems. The identity attack surface does not begin at the login prompt. For some organizations, it begins at the interview.

When Credentials Are Enough

The rise of identity-based attacks reflects a calculated economic logic. Compromising credentials, whether through phishing, credential stuffing from previously leaked databases, purchasing access on dark web markets, or simply guessing poorly configured accounts, is cheap and scalable. Attackers aren't breaking in; they're logging in with stolen credentials and tokens, and then exploiting fragmented identity estates to escalate privileges and move laterally without triggering traditional defenses.

This matters because it means that many traditional security tools are effectively blind to the initial stages of an identity-based attack. Endpoint detection and response solutions look for malicious processes and unusual file behavior. Network monitoring tools flag anomalous traffic patterns. Neither category is well-positioned to distinguish a threat actor using a legitimately obtained username and password from the employee who owns that account. Once authenticated, the attacker blends into the noise of normal enterprise activity, particularly in large organizations where many users access many systems throughout the day.

The browser has become a primary battleground in this environment, appearing in nearly 48% of Unit 42 incidents. This statistic deserves more attention than it typically receives in security discussions, because it reframes the threat model entirely. The browser is not a peripheral tool. It is now the primary interface through which employees access enterprise applications, conduct authentication flows, and interact with SaaS environments. When an attacker can compromise a browser session — through credential harvesting, session token theft, or adversary-in-the-middle proxy attacks — they inherit authenticated access to everything that browser was connected to. This is not a theoretical risk. It is a documented attack path appearing in nearly half of all major incidents.

Social engineering persists due to overpermissioned access, gaps in behavioral visibility, and unverified user trust in human processes. This observation points to the structural nature of the problem. Organizations routinely grant users more access than their roles require because restricting access creates friction and generates support tickets. Identity governance processes are often reactive rather than proactive, with access reviews conducted infrequently and with limited rigor. The accumulated result is an identity estate riddled with excessive permissions, dormant accounts, unmanaged service accounts, and OAuth tokens authorized years ago and long since forgotten. Each one represents a potential entry point or pivot point for a threat actor who knows where to look. A previous Unit 42 cloud IAM study analyzing more than 680,000 identities found 99% carried excessive permissions — a finding that contextualizes just how normalized over-permissioning has become.

Multi-Surface Attacks

In Unit 42's 2025 investigations, 87% of intrusions crossed two or more attack surfaces, and 67% crossed three or more, with identity implicated in nearly 90% of incidents. Nearly 48% involved browser-based activity. A single compromised SaaS credential can pivot to cloud environments, internal applications, and partner systems through federated identity and OAuth trust relationships.

The multi-surface nature of modern attacks is directly enabled by identity abuse. An attacker who compromises a single SaaS credential can often pivot to cloud environments, internal applications, and partner systems through federated identity and OAuth trust relationships. The attack surface is no longer defined by network boundaries. It is defined by identity trust relationships, and those relationships now span the entire enterprise ecosystem and beyond.

The Non-Human Identity Problem

There is a dimension of the identity crisis that rarely appears in mainstream security coverage, and it is arguably the one growing fastest: the explosion of non-human identities. Service accounts, automation roles, API keys, OAuth tokens, and the emerging category of AI agent identities now outnumber human users in many enterprise environments — globally, machine identities outnumber human ones by a ratio of 82 to 1, according to CyberArk's 2025 Identity Security Landscape research conducted across 2,600 security decision-makers worldwide. These identities are frequently over-privileged, rely on long-lived credentials, and are inconsistently monitored or governed.

For an attacker, compromising a service account is often higher leverage and considerably quieter than compromising a person. A service account does not take vacations, does not generate anomalous login times, and does not trigger behavioral alerts the way a human account might when it starts accessing unusual systems at 2 a.m. Service accounts operate continuously and their activity blends naturally into the noise of automated processes. When an attacker pivots through a service account, they are wearing a costume that the security tooling was designed to ignore.

The Unit 42 2026 report explicitly calls out the governance gap around what it terms shadow identities: unsanctioned accounts, developer environments, and third-party connectors created outside standard onboarding processes that bypass standard review and logging entirely. These are not rogue accounts in the traditional sense. They are the predictable byproduct of organizations moving fast through cloud adoption, SaaS proliferation, and AI integration without building governance infrastructure that can keep pace. The shadow identity problem is not a hygiene failure by individuals. It is an organizational design failure.

The Agentic AI Wildcard

As organizations deploy AI agents that autonomously access internal systems, execute multi-step tasks, and connect to sensitive data, each agent represents a new identity with privileges that may far exceed what any human user holds. Governing agentic AI access using the same discipline applied to human and machine identities is not a future problem. It is a present one.

Agentic AI adds a further dimension that the security industry is only beginning to grapple with seriously. AI agents — systems capable of autonomously executing multi-step workflows with access to internal tools, data, and APIs — represent a new category of identity that carries privileges that may exceed what most human users are granted. When an attacker compromises an AI agent, or manipulates it through prompt injection or supply chain tampering, they inherit not just a credential but an autonomous executor with wide-ranging system access. The 2026 report explicitly identifies centralizing the management of human, machine, and agentic identities as a foundational defensive requirement, placing AI agents in the same governance framework as service accounts and human users. Few organizations have operationalized this yet.

The SaaS Supply Chain: Trusted and Weaponized

The third-party SaaS supply chain has become one of the most consequential identity attack surfaces in the enterprise, and one of the least governed. Attacks involving third-party SaaS applications surged 3.8 times since 2022 and accounted for 23% of all attacks in Unit 42's 2025 dataset, as threat actors abuse OAuth tokens and API keys for lateral movement. The mechanism exploited is not a vulnerability in the traditional sense. It is trust itself.

When an organization authorizes a vendor tool, a SaaS integration, or an open-source dependency, it extends implicit trust to that external party and everything connected to it. Attackers who compromise a vendor's management plane — or who abuse a legitimate vendor tool with legitimate credentials — can reach into downstream customer environments without triggering any perimeter alert, because they are entering through a trusted, authenticated channel. By abusing trusted integrations, vendor tools, and application dependencies, they bypass traditional perimeters and can expand impact well beyond a single system.

The defensive challenge here is not primarily technical. It is organizational. Many enterprises lack a complete inventory of their SaaS connections, vendor agents, and transitive library dependencies. They cannot quickly answer which systems a given integration can access, what permissions it holds, or what it has been doing recently. When an incident occurs involving a third-party integration, the forensic picture takes longer to assemble precisely because activity arrives through trusted channels and logs often look entirely legitimate. Unit 42 recommends predefining "break-glass" severing plans — documented procedures for revoking tokens, disabling connectors, and isolating vendor agents that can be executed without improvisation during an active incident. The organizations that have such plans move far faster when third-party compromise hits.
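A break-glass plan is most useful when it is encoded as data and rehearsed, not buried in a wiki. The sketch below shows one way such a plan could be structured; the vendor names, connector identifiers, and action names are all hypothetical stand-ins for whatever real admin-API calls an organization's stack provides.

```python
# Sketch of a predefined "break-glass" severing plan encoded as data
# rather than tribal knowledge. Every identifier here is hypothetical;
# the point is the shape: an ordered, reviewable list of steps per
# vendor that responders can rehearse before an incident.

SEVERING_PLANS = {
    "vendor-crm": [
        ("revoke_oauth_tokens", {"app_id": "crm-sync"}),
        ("disable_connector",   {"connector": "crm-webhook"}),
        ("isolate_host",        {"host": "crm-agent-01"}),
    ],
}

def execute_plan(vendor, registry, dry_run=True):
    """Walk the documented severing steps for a vendor. With
    dry_run=True the plan is printed for rehearsal; wiring the
    actions to real admin APIs is left deliberately unimplemented."""
    for action, params in registry[vendor]:
        if dry_run:
            print(f"WOULD RUN: {action} {params}")
        else:
            raise NotImplementedError(action)  # connect to real APIs here

execute_plan("vendor-crm", SEVERING_PLANS)
```

The dry-run mode is the design point: a plan that can be printed and walked through in a tabletop exercise is one that can be executed without improvisation when a vendor compromise actually hits.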

AI Is Closing the Window

The identity problem would be serious enough on its own. Combined with AI-accelerated attack operations, it becomes genuinely alarming. The fastest 25% of intrusions reached data exfiltration in just 72 minutes in 2025, down from 285 minutes the previous year. That is a fourfold acceleration in attack speed in a single year. The practical implication is that the window between initial access and significant damage has collapsed to the point where human-speed detection and response processes are no longer adequate for the fastest attacks.

AI is enabling this acceleration across multiple phases of the attack lifecycle. Automation tools accelerate intrusion steps, generative AI creates human-like content for personalized lures, voice cloning, and adaptive interactions, while agentic AI autonomously executes multi-step tasks including cross-platform reconnaissance and creating synthetic identities for targeted campaigns. The combination is particularly dangerous in the context of identity attacks. AI can generate highly convincing personalized phishing content at scale, conduct reconnaissance to identify high-value targets and their organizational relationships, clone voices for vishing attacks, and automate the lateral movement steps that follow initial access. What once required a skilled attacker investing hours of preparation can now be partially or fully automated.

Attackers now begin scanning for newly discovered vulnerabilities within 15 minutes of public disclosure. While that statistic relates to vulnerability exploitation rather than identity attacks directly, it illustrates the broader pattern: AI is compressing every phase of attacker operations, including the reconnaissance and targeting work that precedes identity-based intrusions.

The Financial Arithmetic of Extortion

The surge in identity-based attacks does not exist in a vacuum. It is being driven, sustained, and amplified by the financial returns that successful intrusions generate. Financially motivated attacks accounted for the majority of the 750 incidents Unit 42 responded to last year. Median initial extortion demands rose from $1.25 million in 2024 to $1.5 million in 2025. Median payments also increased substantially year over year, even as threat actors calibrated demands more carefully against victims' perceived annual revenue — dropping that ratio from 2% of perceived annual revenue (cited in the prior year's report) to 0.55% in 2025. The declining percentage of revenue demanded is not a sign of restraint. It is a sign of strategic refinement.

In 2025 cases where negotiations occurred, the median reduction between initial demand and final payment grew from 53% to 61% — meaning experienced negotiators can still drive significant reductions, and the existence of that negotiability is itself baked into the attackers' pricing model. Ransomware groups now operate with defined roles, affiliate programs, and repeatable negotiation playbooks. Some maintain what Unit 42 characterizes as brand reputations through dark web communications. In 2025, threat actors fulfilled their stated commitments — providing decryption keys or claiming to delete stolen data — in 68% of cases where they made a promise. This reliability is not ethical behavior. It is customer retention strategy for a criminal business model.

The extortion model itself is evolving in ways that undermine common defensive assumptions. Encryption appeared in 78% of extortion cases in 2025, a sharp decline from the levels at or above 90% seen consistently from 2021 through 2024. This represents the most pronounced year-over-year shift in Unit 42's ransomware dataset. Ransomware response planning has traditionally centered on backup and recovery capabilities. Organizations have invested heavily in immutable backup systems, offline copies, and rapid restoration procedures, operating under the assumption that if they can restore their systems without paying, they have neutralized the threat. That assumption is now structurally incomplete.

When an attacker has already exfiltrated sensitive data and is threatening to publish it on a leak site, a clean backup is irrelevant to that leverage. The victim faces reputational damage, regulatory exposure, and the risk of sensitive customer or partner data being made public regardless of whether they can restore their systems. Identity-based attacks are particularly well-suited to this data-theft-first model, because an authenticated attacker can quietly exfiltrate data over days or weeks before triggering any obvious disruption, maximizing the volume of stolen data before detection becomes likely. Unit 42's data shows the median time to exfiltration across all 2025 investigations was two days — not 72 minutes, which represents only the fastest quartile of attacks. Organizations focused solely on the worst-case speed scenario may be underpreparing for the slower, more methodical intrusions that quietly drain data across longer periods.

The Systemic Nature of the Failure

What makes the Unit 42 findings particularly sobering is the report's assessment of why these attacks succeed. In over 90% of the incidents investigated, misconfigurations or gaps in security coverage materially enabled the attack, not advanced tradecraft. These breaches were preventable in the sense that the exposure gaps attackers exploited were gaps the defending organization could have closed.

This is not primarily a story about sophisticated nation-state actors deploying zero-day exploits that no defender could reasonably anticipate. It is a story about basic hygiene failures, misconfigured identity systems, excessive permissions, weak authentication controls, poor visibility into identity behavior, and inadequate account recovery verification that collectively create an environment where attackers can operate with relative ease. The sophistication is in the targeting and the speed, not in overcoming cutting-edge defenses.

One of the most counterintuitive contributors to this problem is tool sprawl. Many organizations are running 50 or more security products simultaneously, creating a paradox where more security investment produces less security visibility. When detection and response data is fragmented across dozens of disconnected platforms, the signals are often present in the logs, and the forensic evidence is recoverable after the fact, but during the attack teams must stitch together data from multiple sources, slowing response during the most critical early window. More tools, configured inconsistently across a complex environment, can produce more noise and less clarity than fewer, well-integrated ones. This is an architectural problem that budget alone cannot solve.

Architectural Problem

When 87% of incidents span multiple attack surfaces and nearly 90% implicate identity weaknesses, this is long past being an endpoint problem or an identity problem in isolation. The problem is architectural. Many organizations are running 50 or more security products — and achieving less visibility, not more.

Organizations have built complex, interconnected environments of cloud services, SaaS applications, identity providers, API integrations, and third-party tools, and they have done so faster than they have built the visibility and governance capabilities needed to secure them. Identity has become the connective tissue of the modern enterprise, but identity security has not kept pace with the expansion of identity's role.

What Effective Defense Actually Requires

The prescriptions that follow from Unit 42's findings are more demanding than the conventional security checklist might suggest. Multi-factor authentication is necessary but not sufficient: MFA bypass through SIM swapping, adversary-in-the-middle proxies, and social engineering of MFA recovery processes has become routine. Security leaders must move beyond user awareness training and treat social engineering as a systemic threat. That means behavioral analytics and identity threat detection and response to proactively detect credential misuse, hardened identity recovery processes backed by conditional access enforcement, and zero trust principles extended to identities themselves, not just network perimeters.

The voice channel deserves explicit inclusion in that threat model — a gap that many identity security programs still leave open. Voice-based callback techniques appeared in 23% of social engineering incidents, and voice-based attacks have proven highly effective precisely because most organizations have no technical controls on their voice infrastructure and no telemetry from phone systems feeding into their incident response platforms. An attacker who bypasses email filtering by simply calling the help desk is exploiting a defense gap that no amount of endpoint investment addresses. Telephone-channel threat intelligence, voice traffic monitoring, and scripted verification protocols for inbound IT support calls are not exotic capabilities. They are table stakes that few organizations have operationalized.

Identity threat detection and response represents a genuine maturation of security operations, moving from perimeter-and-endpoint models toward continuous monitoring of identity behavior across all systems. The goal is to detect anomalous authentication patterns, unusual access requests, and privilege escalation attempts in near-real time, and to respond fast enough to contain an intrusion before it reaches the 72-minute pace at which the fastest quartile of attacks completed exfiltration. But the architecture question matters as much as the tooling question: ITDR that operates as a standalone silo produces the same fragmentation problem that 50 disconnected security products do. Identity telemetry needs to feed into a consolidated SOC view, not a separate dashboard that no one checks.
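As a deliberately simplified illustration of the kind of signal ITDR tooling hunts for, the sketch below flags "impossible travel": the same account authenticating from two different countries within a short window. The event shape, field names, and one-hour threshold are all hypothetical; production systems baseline far richer telemetry (device, ASN, time-of-day behavior) than country codes.

```python
from datetime import datetime, timedelta

# Hypothetical event shape: (username, timestamp, source_country).
# Real ITDR products consume much richer telemetry; this only shows
# the shape of one detection heuristic.

def flag_anomalous_logins(events, window=timedelta(hours=1)):
    """Flag logins by the same user from two different countries
    inside a short window: a crude 'impossible travel' check."""
    alerts = []
    last_seen = {}  # user -> (timestamp, country)
    for user, ts, country in sorted(events, key=lambda e: e[1]):
        if user in last_seen:
            prev_ts, prev_country = last_seen[user]
            if country != prev_country and ts - prev_ts <= window:
                alerts.append((user, prev_country, country, ts))
        last_seen[user] = (ts, country)
    return alerts

events = [
    ("alice", datetime(2025, 6, 1, 9, 0), "US"),
    ("alice", datetime(2025, 6, 1, 9, 20), "RO"),  # 20 min later, new country
    ("bob",   datetime(2025, 6, 1, 9, 0), "US"),
    ("bob",   datetime(2025, 6, 1, 14, 0), "DE"),  # outside the window
]
print(flag_anomalous_logins(events))  # alice is flagged, bob is not
```

Even a heuristic this crude only works if authentication logs from every identity provider land in one place, which is exactly the consolidation argument above.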

Least privilege enforcement, long recommended and long neglected, becomes genuinely urgent when 65% of intrusions ride on identity abuse. The implementation challenge at enterprise scale is real, and treating it as simply a configuration task misunderstands the organizational dynamics at play. Privilege creep is not an accident. It is the predictable result of years of access requests approved under time pressure, roles that expanded as job responsibilities shifted, and service accounts created for temporary projects that became permanent infrastructure. Reversing it requires a standing program — continuous access review rather than periodic audits — combined with automated detection of privilege drift and an executive-level commitment to accepting the friction that comes with actually enforcing least privilege. The principle is simple; the institutionalization of it is hard and requires sustained organizational will that most security teams have historically struggled to maintain against competing priorities.
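One way to operationalize continuous access review is to diff what each identity is granted against what it has actually exercised during the review window. The sketch below shows that comparison; the identity and permission names are illustrative, and in practice the "used" sets would come from cloud audit logs or IAM access analyzers.

```python
# Minimal privilege-drift check: permissions granted but never used
# during a review window are candidates for revocation. All names
# here are illustrative.

def unused_permissions(granted: dict, used: dict) -> dict:
    """Return, per identity, permissions granted but never exercised."""
    return {
        identity: sorted(perms - used.get(identity, set()))
        for identity, perms in granted.items()
        if perms - used.get(identity, set())
    }

granted = {
    "svc-backup": {"s3:GetObject", "s3:PutObject", "iam:PassRole"},
    "jane.doe":   {"db:read", "db:write"},
}
used = {
    "svc-backup": {"s3:GetObject", "s3:PutObject"},
    "jane.doe":   {"db:read", "db:write"},
}
print(unused_permissions(granted, used))
# {'svc-backup': ['iam:PassRole']}
```

Run continuously rather than annually, this kind of diff turns least privilege from a periodic audit finding into a standing queue of revocation candidates.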

For machine and agentic identities specifically, the defensive approach requires rethinking governance tools built for human users. Human-centric identity governance assumes seasonal access reviews, manager approvals, and HR-driven lifecycle events. Service accounts and AI agents do not have managers. They do not have annual reviews. They do not get offboarded when a project ends unless someone explicitly builds that process. Effective governance for non-human identities means treating them with the same rigor as privileged human accounts: inventorying them completely, auditing their effective permissions continuously, setting hard expiration dates on credentials, and alerting on any deviations from their expected behavioral baseline. The organizations doing this well are treating their service account estate as a privileged access management problem, not a configuration management problem.
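A hard expiration policy for non-human credentials needs very little machinery to enforce. The sketch below, using an illustrative inventory format and a hypothetical 90-day maximum, flags service accounts whose credentials are past the limit or were never rotated at all:

```python
from datetime import date, timedelta

# Hypothetical inventory rows; field names are illustrative. The
# policy is the point: every non-human credential gets a hard
# maximum age, and "never rotated" is treated as a violation.

MAX_CREDENTIAL_AGE = timedelta(days=90)

def stale_credentials(inventory, today):
    """Flag non-human identities whose credentials exceed the max
    age or have no recorded rotation date at all."""
    flagged = []
    for account in inventory:
        rotated = account.get("last_rotated")
        if rotated is None or today - rotated > MAX_CREDENTIAL_AGE:
            flagged.append(account["name"])
    return flagged

inventory = [
    {"name": "svc-ci-deploy", "last_rotated": date(2025, 1, 10)},
    {"name": "svc-reporting", "last_rotated": date(2025, 5, 20)},
    {"name": "agent-llm-ops", "last_rotated": None},  # never rotated
]
print(stale_credentials(inventory, today=date(2025, 6, 1)))
# ['svc-ci-deploy', 'agent-llm-ops']
```

Note that the check only works against a complete inventory, which is why the inventorying step comes first in the governance sequence described above.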

The SaaS supply chain requires a separate playbook. Defending against third-party SaaS abuse means maintaining a living inventory of every authorized integration and its effective permissions — not a point-in-time snapshot but a continuously updated registry that flags new OAuth grants, permission scope changes, and unusual activity patterns in real time. It also means designing for severability: the ability to revoke, isolate, or cut off a vendor integration on short notice, without cascading operational failures. Organizations that discover during an active incident that they cannot safely disable a vendor connector without taking down critical business processes are in a position where the attacker effectively holds a hostage. Designing around that dependency before an incident is the only way to negotiate from strength.
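The "living inventory" idea reduces to a diff between an approved baseline and the OAuth grants actually observed. A minimal sketch of that registry diff, with hypothetical application and scope names:

```python
# Sketch of a continuously updated integration registry: diff today's
# observed OAuth grants against the approved baseline and flag new
# integrations or expanded permission scopes. App and scope names
# are illustrative.

def diff_grants(baseline: dict, observed: dict):
    """Return (new_integrations, scope_expansions) vs. the baseline."""
    new = sorted(set(observed) - set(baseline))
    expanded = {
        app: sorted(set(scopes) - set(baseline[app]))
        for app, scopes in observed.items()
        if app in baseline and set(scopes) - set(baseline[app])
    }
    return new, expanded

baseline = {
    "crm-sync":     ["contacts.read"],
    "calendar-bot": ["calendar.read"],
}
observed = {
    "crm-sync":     ["contacts.read", "contacts.write"],  # scope grew
    "calendar-bot": ["calendar.read"],
    "pdf-export":   ["files.read.all"],                   # new grant
}
print(diff_grants(baseline, observed))
# (['pdf-export'], {'crm-sync': ['contacts.write']})
```

Scope expansions on an already-approved app are the subtler of the two signals, and exactly the kind of change a point-in-time review would miss.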

The identity crisis documented in Unit 42's report is not a prediction of future risk. It is a description of current reality. Two-thirds of organizations that suffered a serious enough intrusion to call in professional incident responders got there because someone logged in with credentials they should not have possessed, or because an identity system was misconfigured in ways that expanded access far beyond its intended scope. The perimeter has not moved. It has dissolved. And in its place, identity now defines the boundary between everything an organization holds and everyone who should be kept out.

Key Takeaways

  1. Identity is the new perimeter: 65% of initial access in major incidents was identity-based. Phishing and vulnerability exploitation tied at 22% each. Security budgets and strategies must reflect this distribution, not the old assumptions.
  2. Social engineering has gone systemic — and split into two distinct threats: High-touch precision attacks (help desk manipulation, voice impersonation) and at-scale deception campaigns (ClickFix, SEO poisoning) require fundamentally different defenses. User awareness training addresses neither adequately.
  3. Speed has outpaced human response: The fastest quartile of intrusions reached data exfiltration in 72 minutes in 2025, down from 285 minutes the prior year. The median time was two days — meaning slower, stealthier exfiltration campaigns are just as dangerous and far less likely to be caught in time.
  4. Backups don't neutralize extortion anymore: Encryption appeared in 78% of extortion cases in 2025, down sharply from above 90% in prior years. Data theft as standalone leverage means recovery capability alone cannot address the threat of exposure.
  5. Machine and agentic identities are ungoverned attack surface: Service accounts, API keys, and AI agents frequently outnumber human users, carry excessive privileges, and have no systematic lifecycle management. This is the identity governance gap that attackers will increasingly exploit.
  6. SaaS supply chain abuse has surged 3.8x since 2022: Third-party integrations are trusted attack vectors. Organizations that cannot quickly answer what their vendor integrations can access, and cannot rapidly sever them, have a structural liability in their incident response capability.
  7. Over 90% of breaches were preventable: The failures are architectural, not adversarial. Misconfigured identity systems, excessive permissions, tool sprawl, and weak verification processes are the root causes, not advanced tradecraft.

This article is based on findings from the Palo Alto Networks Unit 42 2026 Global Incident Response Report, published February 17, 2026, drawing on more than 750 major incident response engagements conducted between October 2024 and September 2025 across more than 50 countries. Additional context draws on the Unit 42 2025 Global Incident Response Report: Social Engineering Edition (August 2025), the Muddled Libra threat assessment (August 2025), and CyberArk's 2025 Identity Security Landscape research. The Sam Rubin quote appears in the official Palo Alto Networks press release for the 2026 report.
