Why Passwords and Biometrics Are Failing in 2026
Kasey Cromer, Netlok | March 18, 2026
Executive Summary
The identity and authentication methods that enterprises rely on today were not designed for AI-powered attackers. Deepfakes now defeat facial recognition at scale. Voice clones bypass call center verification in seconds. AI-generated phishing harvests credentials faster than security teams can respond. What worked five years ago is now a liability.
This is not a future threat. It is happening now, in 2026. As Gartner anticipated, 30 percent of enterprises now consider identity verification and authentication solutions unreliable in isolation due to AI-generated deepfakes. Deepfake fraud attempts have surged more than 3,000 percent since 2022. Voice cloning attacks increased 680 percent in the past year alone. The authentication crisis is here.
Security leaders and boards need to understand that legacy identity and authentication has become a material enterprise risk. Photolok Passwordless IdP offers an alternative designed for this threat landscape, replacing passwords and biometrics with photo-based identity and authentication that gives AI attackers nothing to clone, nothing to phish, and nothing to replay.
The Authentication Crisis: What AI Has Changed
AI has fundamentally broken the assumptions behind traditional identity and authentication. The methods enterprises have relied on for decades (passwords, one-time codes, facial recognition, and voice verification) all assume that attackers are human and that fakes are easy to spot. Neither assumption holds in 2026.
Deepfakes are defeating facial recognition. Attacks using face-swap deepfakes to bypass biometric authentication have increased over 700 percent in recent years, and the problem continues to accelerate. The volume of deepfakes shared online has grown 16-fold in just two years, reaching an estimated 8 million in 2025 (Fortune). In Q1 2025 alone, financial losses from deepfake-enabled fraud exceeded $200 million in North America. The Arup incident, in which a finance worker was tricked into wiring $25 million after a video call with deepfake executives, demonstrated that attackers can now fabricate entire multi-person video conferences. When shown high-quality deepfake videos, humans correctly identify them as fake only 24.5 percent of the time.
Voice clones are bypassing verification. Voice cloning now requires just three seconds of audio to produce a convincing replica, complete with natural intonation, emotion, and breathing patterns. Voice deepfakes rose 680 percent in the past year (Pindrop). AI-generated voice scams have surged 148 percent in 2025, with major retailers reporting over 1,000 AI scam calls per day. Synthetic voices no longer carry the obvious flaws that once made them easy to detect. CEO fraud using voice clones now targets at least 400 companies daily.
AI-generated phishing is harvesting credentials at unprecedented scale. AI-crafted phishing emails achieve 54 percent click-through rates compared to 12 percent for traditional phishing, making them 4.5 times more effective (Microsoft Digital Defense Report). Microsoft estimates AI can make phishing operations up to 50 times more profitable through higher engagement and automation efficiency. The FBI has warned that AI greatly increases the speed, scale, and automation of phishing schemes. Over the holiday season in late 2025, AI-generated phishing attacks surged 14-fold, representing 56 percent of all reported phishing attacks (Hoxhunt).
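The effectiveness multiplier quoted above follows directly from the two click-through rates; a quick sanity check (rates taken from the figures cited in the text):

```python
# Sanity check of the phishing effectiveness figures cited above.
# The two click-through rates are the ones attributed to the
# Microsoft Digital Defense Report in the text.

ai_ctr = 0.54           # click-through rate of AI-crafted phishing
traditional_ctr = 0.12  # click-through rate of traditional phishing

effectiveness_multiplier = ai_ctr / traditional_ctr
print(f"AI-crafted phishing is {effectiveness_multiplier:.1f}x more effective")
```

This prints a 4.5x multiplier, matching the "4.5 times more effective" figure.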
The speed and scale of AI attacks outpace human defenses. AI tools allow threat actors to accelerate reconnaissance, create convincing phishing messages, and scale their operations far beyond what was previously possible (CrowdStrike 2026 Global Threat Report). Average breakout time for cyber intrusions has collapsed to just 29 minutes, with the fastest observed at 27 seconds from initial access to lateral movement. By the time security teams detect an incident, the damage is often done.
Figure: Average Breakout Time Is Shrinking — CrowdStrike 2026 Global Threat Report.
Why Traditional Defenses Are Failing
Passwords remain the dominant login method, and they are still easily compromised. The Verizon 2025 DBIR found that stolen credentials were the initial access vector in 22 percent of breaches, more than any other category. In basic web application attacks, 88 percent involved stolen credentials. Analysis shows that only 3 percent of compromised passwords met basic complexity requirements. Credential stuffing now accounts for 19 percent of all authentication attempts at the median enterprise, rising to 25 percent at large organizations.
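As one illustration of why so few compromised passwords pass even minimal checks, here is a sketch of a basic complexity test. The exact rule set behind the 3 percent statistic is not published, so the thresholds below are assumptions:

```python
import re

def meets_basic_complexity(password):
    """Illustrative 'basic complexity' rule: at least 12 characters with
    upper- and lowercase letters, a digit, and a symbol. These thresholds
    are assumptions for the sketch, not the rule set behind the statistic."""
    return (
        len(password) >= 12
        and re.search(r"[a-z]", password) is not None
        and re.search(r"[A-Z]", password) is not None
        and re.search(r"\d", password) is not None
        and re.search(r"[^A-Za-z0-9]", password) is not None
    )

print(meets_basic_complexity("password123"))       # False: no uppercase, no symbol
print(meets_basic_complexity("C0rrect-Horse-9!"))  # True: passes every check
```

Of course, even a password that passes such a check is still phishable and stuffable, which is the larger point of this section.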
SMS- and app-based one-time codes are vulnerable at every step. SIM swapping, real-time phishing, and social engineering all defeat these controls. Prompt bombing, where users are bombarded with MFA requests until they approve one out of frustration, appeared in 14 percent of incidents in the 2025 DBIR. Adversary-in-the-middle attacks intercept both passwords and session tokens after legitimate MFA authentication. Phishing-as-a-service kits like Tycoon2FA and EvilProxy are specifically designed to bypass modern MFA controls.
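Prompt bombing works because push-based MFA keeps asking until the user gives in. A common mitigation is to cap prompts per time window and force a phishing-resistant step-up once the cap is hit; the sketch below uses hypothetical names and thresholds, not any specific vendor's behavior:

```python
import time
from collections import defaultdict, deque

# Illustrative defense against MFA prompt bombing: allow at most
# MAX_PROMPTS_PER_WINDOW push prompts per user per WINDOW_SECONDS.
# Once the cap is hit, the caller should fall back to a stronger
# challenge (e.g., number matching) instead of sending another push.
WINDOW_SECONDS = 300
MAX_PROMPTS_PER_WINDOW = 3

_prompt_log = defaultdict(deque)  # user -> timestamps of recent prompts

def should_send_push(user, now=None):
    """Return False once a user has received too many prompts in the
    window, blocking further pushes for the attacker to bomb with."""
    now = time.time() if now is None else now
    log = _prompt_log[user]
    while log and now - log[0] > WINDOW_SECONDS:
        log.popleft()  # drop prompts that have aged out of the window
    if len(log) >= MAX_PROMPTS_PER_WINDOW:
        return False
    log.append(now)
    return True

# Simulated bombing burst: the fourth request inside the window is blocked.
results = [should_send_push("alice", now=t) for t in (0, 10, 20, 30)]
print(results)  # [True, True, True, False]
```

Rate limiting alone does not stop adversary-in-the-middle interception, which is why the section's broader argument is for removing phishable factors entirely.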
Biometrics, once seen as the answer, are now being defeated by synthetic media. Deepfakes now account for 40 percent of all biometric fraud attempts. One in 20 identity verification failures in 2025 is linked to deepfake usage (Keepnet Labs). Attackers use face-swap deepfakes and inject pre-recorded or real-time manipulated video streams via virtual cameras to fool liveness detection. The technology gap between attack and defense is widening.
The core issue is that these methods assume attackers are human. When AI can perfectly replicate a face, a voice, or a writing style, authentication that relies on “something you are” or “something you know” becomes fundamentally compromised. Enterprises need identity and authentication that AI cannot fake.
Why Photolok Addresses the AI Identity and Authentication Threat
Photolok is not another point solution, and it is not a standalone SaaS product. It is a passwordless Identity Provider (IdP) and authentication server that functions as the front door to your apps and systems, working with existing platforms including Okta Workforce. As an identity provider, Photolok verifies user identities before granting access to any application. By replacing passwords at the identity layer, it secures authentication across every app and system connected to it. The apps themselves never see or store credentials; they simply trust Photolok's verification.
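The trust relationship described above is the standard federated-identity pattern (as in OpenID Connect): the application receives a signed token from the IdP and validates its claims rather than ever handling credentials. The sketch below uses placeholder issuer and client values and omits signature verification; it is a generic illustration of the pattern, not Photolok's actual API:

```python
import time

# Generic federated-login check: the app accepts a login only if the
# (already signature-verified) ID token came from its trusted IdP, was
# minted for this app, and has not expired. In practice the signature is
# checked with a JOSE library against the IdP's published keys.
# The issuer URL and client ID below are placeholders, not real values.
TRUSTED_ISSUER = "https://idp.example.com"  # the IdP's issuer URL
CLIENT_ID = "my-app"                        # this app's registered client ID

def accept_login(claims, now=None):
    """Validate the standard iss/aud/exp claims of an ID token."""
    now = time.time() if now is None else now
    return (
        claims.get("iss") == TRUSTED_ISSUER   # issued by our IdP
        and claims.get("aud") == CLIENT_ID    # intended for this app
        and claims.get("exp", 0) > now        # not expired
    )

claims = {"iss": "https://idp.example.com", "aud": "my-app",
          "sub": "user-42", "exp": time.time() + 3600}
print(accept_login(claims))                       # True
print(accept_login({**claims, "aud": "other-app"}))  # False: wrong audience
```

Because the app only ever evaluates signed claims, there is no password for an attacker to phish from it, which is the architectural point the paragraph above makes.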
What makes Photolok different is what it takes away: photo-based, session-specific authentication gives AI attackers nothing to clone, nothing to phish, and nothing to replay.
Because Photolok sits at the identity provider layer, it complements existing fraud analytics, transaction monitoring, and security controls.
What Security Leaders Should Do Now
The Bottom Line
AI has broken the identity and authentication model that enterprises have relied on for decades. Passwords are stolen at scale. Biometrics are defeated by deepfakes. Voice verification falls to cloning. Attacks now unfold in minutes, not hours. The methods designed for human attackers cannot withstand AI-powered adversaries.
The strategic response is to adopt authentication that gives AI nothing to exploit. Photolok Passwordless IdP replaces passwords and biometrics with photo-based, session-specific identity and authentication that cannot be cloned, phished, or replayed. It integrates with existing platforms like Okta Workforce.
Want to see how Photolok can protect your organization against AI powered authentication attacks?
Request Your Personalized Demo
About the Author
Kasey Cromer is Director of Customer Experience at Netlok.
Sources
[1] Gartner. “Predicts 30% of Enterprises Will Consider Identity Verification Unreliable Due to Deepfakes by 2026.” gartner.com
[2] Verizon. “2025 Data Breach Investigations Report.” verizon.com/dbir
[3] Fortune. “2026 Will Be the Year You Get Fooled by a Deepfake.” December 2025. fortune.com
[4] Pindrop. “2025 Voice Intelligence and Security Report.” pindrop.com
[5] Microsoft. “Digital Defense Report 2025.” microsoft.com
[6] CrowdStrike. “2026 Global Threat Report.” crowdstrike.com
[7] Keepnet Labs. “Deepfake Statistics and Trends 2026.” keepnetlabs.com
[8] Hoxhunt. “Phishing Trends Report 2026.” hoxhunt.com
[9] World Economic Forum. “Global Cybersecurity Outlook 2025.” weforum.org
[10] Netlok. “How Photolok Works.” netlok.com