When anyone can be faked: Photolok as the identity layer for the AI era
Kasey Cromer, Netlok | May 13, 2026
As we move through 2026, the corporate world is facing an existential crisis of trust. The ‘identity surface’ has exploded, but the tools we use to defend it are crumbling under the weight of generative AI. For years, we relied on the human eye and ear as the ultimate backstop for security: if we could see a person’s face, hear their voice, or confirm they knew the correct password or one-time code, we believed we knew who they were.
That era is over.
Today, deepfakes allow bad actors to impersonate anyone — from a frontline support agent to the CFO — with terrifying precision. Legacy authentication methods like biometrics and SMS codes were never designed to withstand AI-powered impersonation. To survive this era, executives and investors must stop viewing identity as an application feature and start viewing it as a foundational layer. Photolok provides that layer, offering a visual identity solution that remains resilient even when faces and voices can no longer be trusted.
In early 2024, global engineering firm Arup’s Hong Kong office lost approximately $25 million after an employee joined a video conference where every other participant — including the CFO — was a deepfake. The attacker used AI-generated video and audio to convincingly simulate multiple executives and authorize a series of fraudulent transfers.
(Sources: CNN, February 2024; SCMP initial report, February 2024; SCMP follow-up naming Arup, May 2024; CNN follow-up naming Arup, May 2024)
At the time, many viewed this as an outlier. By 2026, it has become a standard tactic.
Several high-profile incidents since then illustrate how quickly synthetic identity attacks have matured.
What changed between then and now is scale, cost, and realism.
According to CrowdStrike’s 2026 Global Threat Report, AI-powered impersonation attacks increased by 89% year over year from 2024 to 2025, with continued acceleration into 2026.
The FBI’s Internet Crime Complaint Center (IC3) reported that business email compromise alone accounted for over $3 billion in losses in 2025, with total cybercrime losses exceeding $20 billion — a 26% increase from the prior year.
This is no longer about isolated fraud attempts. It is a systemic shift. Attackers are no longer breaking into systems — they are manipulating people first through social engineering, then using that legitimate access to exfiltrate sensitive data at machine speed, not human speed.
The same technology targeting CFOs is now targeting your employees at home. In 2026, deepfake celebrity scams on Facebook and TikTok have become industrialized — AI-generated videos of Taylor Swift promoting fake investment schemes, doctors appearing to endorse miracle cures. According to Surfshark, celebrity and public-figure impersonations account for more than half of all financial damage linked to deepfakes. Employees conditioned to trust video content in their personal lives carry that assumption into the workplace. When a deepfake ‘executive’ calls with an urgent request, the instinct is to comply — because seeing has always meant believing.
The industry is now confronting what can be described as the biometric paradox. For years, biometrics were positioned as the gold standard because they were ‘uniquely you.’ In 2026, that uniqueness is a liability.
Publicly available data — social media videos, earnings calls, podcast appearances — provides the perfect training set for AI systems to replicate voice and facial patterns.
Recent reports reinforce how quickly this model is breaking.
Meanwhile, attackers who flood users with repeated MFA approval requests (a tactic known as MFA fatigue, or push bombing) create a weariness that overrides caution — and they continue to succeed because the vulnerability is human, not technical. When combined with a convincing deepfake voice or video, even security-aware employees comply.
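One common defense against this flooding tactic is to stop prompting once pushes pile up and fall back to a stronger, user-initiated factor. The sketch below is a hypothetical policy, not any vendor's implementation; the class name, thresholds, and return values are illustrative assumptions.

```python
import time
from collections import defaultdict, deque

class PushGuard:
    """Toy mitigation for MFA fatigue ("push bombing"): if too many
    push prompts are requested for one user in a short window, stop
    prompting and escalate to a stronger factor instead.
    (Hypothetical sketch, not any vendor's API.)"""

    def __init__(self, max_prompts=3, window_s=300):
        self.max_prompts = max_prompts
        self.window_s = window_s
        self._prompts = defaultdict(deque)  # user -> prompt timestamps

    def request_push(self, user, now=None):
        now = time.time() if now is None else now
        q = self._prompts[user]
        # Drop prompts that have aged out of the sliding window.
        while q and now - q[0] > self.window_s:
            q.popleft()
        if len(q) >= self.max_prompts:
            # Likely a flood: a tap-to-approve prompt the user might
            # accept just to silence it is no longer safe to send.
            return "escalate"  # e.g. require number matching or a FIDO2 key
        q.append(now)
        return "prompt"
```

The key design choice is that the rate limit protects the human, not the credential: after the threshold, the tired user is never again offered a one-tap approval the attacker can exploit.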
Traditional identity and access management was built for a world of static credentials and human users. It has no answer for synthetic voices, fabricated video, or AI-generated impersonation.
We are entering a period where ‘seeing is believing’ is no longer just outdated — it is dangerous.
This erosion of trust creates friction across every business function.
Microsoft’s Digital Defense Report highlights that nation state groups are already experimenting with synthetic media for influence and intrusion campaigns.
When the visual layer is compromised, the entire remote work security model begins to collapse.
To solve this, we must address a fundamental misunderstanding in the cybersecurity industry.
There is a critical distinction the security industry often fails to make: identity providers are not applications. An identity provider is the system that verifies who you are before you ever touch an application. It issues the proof of identity that connected applications rely on. Photolok is an identity provider — not just another tool in the SaaS portfolio, but the layer that sits beneath all of them.
The average enterprise now manages more than 305 SaaS (software as a service) applications. (Source: Okta Businesses at Work Report, January 2026) When identity is treated as just another app within that sprawl, it fails. Identity is not an app — it is the layer beneath the apps.
This distinction matters more in the AI era because SaaS apps are multiplying rapidly, AI agents are becoming new identity actors, and authentication must happen before any interaction with systems or agents.
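The "layer beneath the apps" idea can be made concrete: in standard federation flows such as OpenID Connect, every application redirects the user to the identity provider, which authenticates them before the app is ever touched. The sketch below assumes illustrative endpoint and client values; it is not Photolok's or Okta's actual API.

```python
# Sketch of how an app delegates login to an identity provider via
# the OpenID Connect authorization-code flow. The IdP, not the app,
# verifies the user and returns a signed code the app exchanges for
# tokens. Endpoint and client values here are illustrative only.
import secrets
from urllib.parse import urlencode

IDP_AUTHORIZE_URL = "https://idp.example.com/oauth2/authorize"  # hypothetical

def build_login_redirect(client_id, redirect_uri):
    """Return (url, state): the URL the app redirects the browser to,
    and the CSRF-protection state value to store in the session."""
    state = secrets.token_urlsafe(16)
    params = {
        "response_type": "code",      # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid profile",    # request an identity assertion
        "state": state,
    }
    return f"{IDP_AUTHORIZE_URL}?{urlencode(params)}", state
```

Because every connected application starts from a redirect like this, hardening the IdP hardens all of them at once — which is why identity is a layer, not one more app in the portfolio.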
Photolok is a passwordless identity provider built for a world where biometrics can be faked. It meets 2026 standards for phishing resistant authentication while adding something most identity tools ignore: protection for the person, not just the credential. Photolok integrates with platforms like Okta Workforce and acts as the secure front door for your online environment. Users must prove who they are before accessing any application or interacting with AI systems.
Photolok shifts authentication away from public, replicable signals toward private, human knowledge that AI cannot infer.
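To see what "private, human knowledge" means as an authentication factor, consider a toy scheme in which the user proves identity by picking their pre-enrolled photos out of a grid of decoys. This is a simplified illustration of the knowledge-factor idea, not Photolok's actual protocol; the function names and storage format are assumptions.

```python
# Toy private-knowledge factor: the server stores only salted digests
# of the photos a user enrolled, then checks a later selection against
# them. Unlike a face or voice, the secret (which photos are "yours")
# never appears in public data an AI model could scrape and replicate.
import hashlib
import hmac
import secrets

def _digest(photo_id: str, salt: bytes) -> str:
    return hmac.new(salt, photo_id.encode(), hashlib.sha256).hexdigest()

def enroll(secret_photo_ids):
    """Record a user's chosen photos as salted digests (never plaintext)."""
    salt = secrets.token_bytes(16)
    return {"salt": salt, "digests": {_digest(p, salt) for p in secret_photo_ids}}

def verify(record, selected_photo_ids):
    """True only if the selection exactly matches the enrolled set."""
    selected = {_digest(p, record["salt"]) for p in selected_photo_ids}
    return selected == record["digests"]
```

The point of the sketch is what is being verified: a private choice known only to the person, rather than a public signal (face, voice) that generative AI can reconstruct from scraped media.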
The shift to AI-driven impersonation requires more than incremental fixes. It requires rethinking how identity is verified across the organization.
That rethinking must address not just the technical failure points exposed by AI-driven impersonation, but the human ones.
In 2026, the identity surface is the new perimeter.
As attackers use AI to manufacture trust and exfiltrate sensitive data at machine speed, not human speed, organizations must respond with identity systems designed for this new reality.
Photolok represents a shift away from spoofable biometrics and toward private, human-centered authentication. It restores trust not by improving detection, but by changing what is being verified.
We are no longer just protecting systems. We are protecting people. And in an era where anyone can be faked, that distinction matters more than ever.
Request Your Personalized Demo
Kasey Cromer is Director of Customer Experience at Netlok.
[1] CrowdStrike. ‘2026 Global Threat Report.’ February 2026. crowdstrike.com/global-threat-report
[2] Adaptive Security. ‘Voice Cloning Threat Report.’ 2026. adaptivesecurity.com
[3] Microsoft. ‘Digital Defense Report 2025.’ October 2025. microsoft.com
[4] IRONSCALES. ‘Fall 2025 Threat Report: Beyond Detection.’ October 2025. ironscales.com
[5] FBI IC3. ‘2025 Internet Crime Report.’ February 2026. ic3.gov
[6] Pindrop. ‘Voice Intelligence Report.’ March 2026. pindrop.com
[7] Mandiant. ‘M-Trends 2026.’ November 2025. mandiant.com
[8] Okta. ‘Businesses at Work Report.’ January 2026. okta.com
[9] DeepMedia. ‘State of Deepfake Detection.’ February 2026. deepmedia.ai
[10] iProov. ‘Threat Intelligence Report 2025.’ January 2025. iproov.com
[11] Surfshark. ‘Deepfake Statistics.’ 2025. surfshark.com
[12] Netlok. ‘How Photolok Works.’ netlok.com