
When anyone can be faked: Photolok as the identity layer for the AI era

Kasey Cromer, Netlok | May 13, 2026

Executive summary

As we move through 2026, the corporate world is facing an existential crisis of trust. The ‘identity surface’ has exploded, but the tools we use to defend it are crumbling under the weight of generative AI. For years, we relied on the human eye and ear as the ultimate backstop for security. We believed that if we could see a person’s face, hear their voice, or check that they supplied the correct password or code, we knew who they were.

That era is over.

Today, deepfakes allow bad actors to impersonate anyone — from a frontline support agent to the CFO — with terrifying precision. Legacy authentication methods like biometrics and SMS codes were never designed to withstand AI-powered impersonation. To survive this era, executives and investors must stop viewing identity as an application feature and start viewing it as a foundational layer. Photolok provides that layer, offering a visual identity solution that remains resilient even when faces and voices can no longer be trusted.

The deepfake threat: what changed

In early 2024, global engineering firm Arup’s Hong Kong office lost approximately $25 million after an employee joined a video conference where every other participant — including the CFO — was a deepfake. The attacker used AI-generated video and audio to convincingly simulate multiple executives and authorize a series of fraudulent transfers.
(Sources: CNN, February 2024; SCMP initial report, February 2024; SCMP follow‑up naming Arup, May 2024; CNN follow‑up naming Arup, May 2024)

At the time, many viewed this as an outlier. By 2026, it has become a standard tactic.

Several high-profile incidents since then illustrate how quickly synthetic identity attacks have matured:

  • Ferrari — 2024: Executives reported a voice cloning attempt impersonating CEO Benedetto Vigna via WhatsApp, requesting urgent financial action. The attempt failed after verification questions exposed the fraud. (Source: Bloomberg, July 2024)
  • WPP — 2024: The CEO publicly confirmed a deepfake scam attempt involving a synthetic voice and video call targeting executives for financial fraud. (Source: The Guardian, May 2024)
  • UK energy firm — 2019 (early precursor): $243,000 lost via AI-generated voice impersonation of the CEO. This technique has since scaled dramatically with generative AI. (Source: Wall Street Journal)

What changed between then and now is scale, cost, and realism.

According to CrowdStrike’s 2026 Global Threat Report, AI-powered impersonation attacks increased by 89% year over year from 2024 to 2025, with continued acceleration into 2026.

The FBI’s Internet Crime Complaint Center (IC3) reported that business email compromise alone accounted for over $3 billion in losses in 2025, with total cybercrime losses exceeding $20 billion — a 26% increase from the prior year.

This is no longer about isolated fraud attempts. It is a systemic shift. Attackers are no longer breaking into systems — they are manipulating people first through social engineering, then using that legitimate access to exfiltrate sensitive data at machine speed, not human speed.

The same technology targeting CFOs is now targeting your employees at home. In 2026, deepfake celebrity scams on Facebook and TikTok have become industrialized — AI-generated videos of Taylor Swift promoting fake investment schemes, doctors appearing to endorse miracle cures. According to Surfshark, celebrity and public figure impersonations account for more than half of all financial damage linked to deepfakes. Employees conditioned to trust video content in their personal lives carry that assumption into the workplace. When a deepfake ‘executive’ calls with an urgent request, the instinct is to comply — because seeing has always meant believing.

Why legacy authentication cannot keep up

The industry is now confronting what can be described as the biometric paradox. For years, biometrics were positioned as the gold standard because they were ‘uniquely you.’ In 2026, that uniqueness is a liability.

Publicly available data — social media videos, earnings calls, podcast appearances — provides the perfect training set for AI systems to replicate voice and facial patterns.

The numbers reinforce how quickly this model is breaking. Across recent reports, a consistent pattern emerges:

  • Voice cloning: AI models can now replicate a voice with high fidelity using as little as 3 seconds of audio, with some tools achieving 85% speaker similarity from brief samples. (Source: Adaptive Security, 2026)
  • Voice authentication bypass: Deepfake fraud attempts in contact centers surged over 1,300% in 2024, with synthetic voices now capable of fooling legacy voice biometric systems. (Source: Pindrop 2025 Voice Intelligence Report)
  • Facial recognition spoofing: Deepfake attacks that feed synthetic video directly into authentication systems increased 2,665% in 2024, with face swap attacks surging 300% compared to 2023. (Source: iProov 2025 Threat Intelligence Report)
  • Deepfake proliferation: The volume of deepfake content online is doubling approximately every 6 months. (Source: DeepMedia, February 2026)
  • Enterprise concern: 85% of organizations experienced one or more deepfake-related incidents in the past 12 months, yet only 42% feel confident in their ability to defend against them. (Source: IRONSCALES Fall 2025 Threat Report)

Meanwhile, MFA fatigue attacks, in which attackers flood users with repeated approval requests until caution gives way, continue to succeed because the vulnerability is human, not technical. When combined with a convincing deepfake voice or video, even security-aware employees comply.

Traditional identity and access management was built for a world of static credentials and human users. It has no answer for synthetic voices, fabricated video, or AI-generated impersonation.

The erosion of visual trust

We are entering a period where ‘seeing is believing’ is no longer just outdated — it is dangerous.

This erosion of trust creates friction across every business function:

  • Remote onboarding: Organizations cannot reliably verify that the person on screen matches the identity being claimed.
  • High-value transactions: Financial approvals over video or voice become inherently suspect.
  • Executive communication: Board-level instructions delivered via recorded video can no longer be assumed authentic.

Microsoft’s Digital Defense Report highlights that nation state groups are already experimenting with synthetic media for influence and intrusion campaigns.

When the visual layer is compromised, the entire remote work security model begins to collapse.

Rethinking identity for an AI world

To solve this, we must address a fundamental misunderstanding in the cybersecurity industry.

The misunderstanding is this: identity providers are not applications. An identity provider is the system that verifies who you are before you ever touch an application; it issues the proof of identity that connected applications rely on. Photolok is an identity provider — not just another tool in the SaaS portfolio, but the layer that sits beneath all of them.

The average enterprise now manages more than 305 SaaS (software as a service) applications. (Source: Okta Businesses at Work Report, January 2026) When identity is treated as just another app within that sprawl, it fails. Identity is not an app — it is the layer beneath the apps.

This distinction matters more in the AI era because SaaS apps are multiplying rapidly, AI agents are becoming new identity actors, and authentication must happen before any interaction with systems or agents.

Photolok is a passwordless identity provider built for a world where biometrics can be faked. It meets 2026 standards for phishing-resistant authentication while adding something most identity tools ignore: protection for the person, not just the credential. Photolok integrates with platforms like Okta Workforce and acts as the secure front door for your online environment. Users must prove who they are before accessing any application or interacting with AI systems.
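
To make the ‘layer beneath the apps’ idea concrete, here is a minimal sketch of an application delegating login to an external identity provider over standard OpenID Connect, the protocol platforms like Okta Workforce federate on. The endpoint URL and client values are hypothetical placeholders, not Photolok’s documented API.

```python
# Minimal sketch: an app hands authentication off to an external identity
# provider via standard OpenID Connect. All URLs and client values below
# are hypothetical placeholders for illustration only.
import secrets
from urllib.parse import urlencode

IDP_AUTHORIZE_URL = "https://idp.example.com/oauth2/v1/authorize"  # placeholder
CLIENT_ID = "my-app"                                               # placeholder
REDIRECT_URI = "https://app.example.com/auth/callback"             # placeholder

def build_login_redirect() -> str:
    """Build the URL the app redirects users to. The identity layer
    authenticates them (e.g., via photo panels) before the app is touched."""
    state = secrets.token_urlsafe(16)  # anti-CSRF value, stored in the session
    nonce = secrets.token_urlsafe(16)  # binds the eventual ID token to this login
    params = {
        "response_type": "code",       # authorization-code flow
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": "openid profile",
        "state": state,
        "nonce": nonce,
    }
    return f"{IDP_AUTHORIZE_URL}?{urlencode(params)}"
```

The architectural point: the application never handles a credential. It only consumes proof of identity issued by the layer beneath it.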

How Photolok addresses the deepfake gap

Photolok shifts authentication away from public, replicable signals toward private, human knowledge that AI cannot infer.

  • Photo-based authentication. Users identify images from photo panels. These photos are not publicly available and cannot be predicted or replicated by AI.
  • 1-Time Photo. Users authenticate using single-use images that never repeat. Only the first panel appears during login — regular photos stay hidden. Even if an attacker records the session or captures the screen, they gain nothing reusable. There is no pattern for an AI to learn, no credential to replay.
  • Duress Photo. In an era where a deepfake executive can pressure an employee to ‘just log in and approve this,’ Duress Photo provides a silent alarm. It looks like a normal login but triggers a real-time alert to security teams. The coercer sees access granted; responders see a distress signal. The sketch after this list illustrates both mechanics.
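
The toy model below shows why a single-use photo panel leaves nothing for an attacker to replay, and how a duress photo can grant apparent access while silently raising an alarm. It is an illustrative sketch under stated assumptions, not Netlok’s actual implementation; every class, field, and function here is invented for clarity.

```python
# Toy model of single-use photo-panel login with a duress photo. Purely
# illustrative -- not Netlok's design -- but it shows why a recorded
# session yields nothing an attacker can reuse.
import secrets

_rng = secrets.SystemRandom()

def alert_security_team(challenge_id: str) -> None:
    # Hypothetical hook: page the SOC without tipping off the coercer.
    print(f"[SILENT ALERT] duress login on challenge {challenge_id}")

class PanelAuth:
    def __init__(self, one_time_photos, duress_photo, decoy_pool):
        self._photos = list(one_time_photos)  # each image authenticates once
        self._duress = duress_photo
        self._decoys = list(decoy_pool)
        self._pending = {}                    # challenge_id -> correct photo

    def issue_challenge(self, panel_size=9):
        answer = self._photos.pop()           # retired immediately: never reshown
        panel = _rng.sample(self._decoys, panel_size - 2) + [answer, self._duress]
        _rng.shuffle(panel)
        challenge_id = secrets.token_urlsafe(8)
        self._pending[challenge_id] = answer
        return challenge_id, panel

    def verify(self, challenge_id: str, selected: str) -> bool:
        answer = self._pending.pop(challenge_id, None)  # challenge is single-use too
        if selected == self._duress:
            alert_security_team(challenge_id)  # responders see distress...
            return True                        # ...while the coercer sees success
        return selected == answer
```

In this toy model, replaying a captured panel or selection fails because both the challenge and the winning photo are consumed on first use; only the duress path deliberately succeeds, turning coercion into a detection event rather than a breach.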

What security leaders should do now

The shift to AI-driven impersonation requires more than incremental fixes. It requires rethinking how identity is verified across the organization.

  1. Mandate out-of-band visual anchors. Require identity verification through a trusted identity layer before approving sensitive actions. Video and voice alone should not be sufficient (see the step-up sketch after this list).
  2. Audit the identity layer versus the app layer. Map how users authenticate across systems. Any path that allows direct login into SaaS apps without centralized identity verification creates exploitable gaps.
  3. Implement zero trust for internal communications. Treat all internal video, voice, and messaging as unverified until proven otherwise. Trust must be established through identity systems, not perception.
  4. Adopt one-time authentication methods. In settings where logins may be observed or recorded, static credentials — passwords, biometrics — can be captured and reused. One-time methods, like the single-use photo panels sketched above, eliminate reuse and reduce attacker ROI.
  5. Train for coercion scenarios. Simulate deepfake-driven attacks where employees are pressured in real time. Ensure tools and processes provide a safe, silent way to escalate or signal distress.
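
To make recommendations 1 and 4 concrete, here is a minimal sketch, assuming a session record populated by the identity layer, of gating a high-value approval on a fresh out-of-band authentication rather than on what an approver saw or heard. The threshold, session fields, and names are illustrative assumptions.

```python
# Sketch of a step-up check for high-value actions: approval is gated on a
# fresh authentication through the identity layer, never on video or voice
# alone. Threshold, session fields, and names are illustrative assumptions.
import time

MAX_AUTH_AGE_SECONDS = 300          # assumed policy: re-authenticate within 5 minutes
HIGH_VALUE_THRESHOLD_USD = 10_000   # assumed policy threshold

def approve_transfer(session: dict, amount_usd: float) -> None:
    """Raise unless the approver recently re-proved identity out of band."""
    if amount_usd >= HIGH_VALUE_THRESHOLD_USD:
        last_auth = session.get("last_auth_at", 0.0)  # epoch seconds set by the IdP
        if time.time() - last_auth > MAX_AUTH_AGE_SECONDS:
            raise PermissionError("step-up authentication required before approval")
    # ...continue with the transfer workflow only after the check passes
```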

Each of these steps addresses a different failure point exposed by AI-driven impersonation — not just technical vulnerabilities, but human ones.

The bottom line

In 2026, the identity surface is the new perimeter.

As attackers use AI to manufacture trust and exfiltrate sensitive data at machine speed, not human speed, organizations must respond with identity systems designed for this new reality.

Photolok represents a shift away from spoofable biometrics and toward private, human-centered authentication. It restores trust not by improving detection, but by changing what is being verified.

We are no longer just protecting systems. We are protecting people. And in an era where anyone can be faked, that distinction matters more than ever.

Request Your Personalized Demo

About the author

Kasey Cromer is Director of Customer Experience at Netlok.

Sources

[1] CrowdStrike. ‘2026 Global Threat Report.’ February 2026. crowdstrike.com/global-threat-report

[2] Adaptive Security. ‘Voice Cloning Threat Report.’ 2026. adaptivesecurity.com

[3] Microsoft. ‘Digital Defense Report 2025.’ October 2025. microsoft.com

[4] IRONSCALES. ‘Fall 2025 Threat Report: Beyond Detection.’ October 2025. ironscales.com

[5] FBI IC3. ‘2025 Internet Crime Report.’ February 2026. ic3.gov

[6] Pindrop. ‘Voice Intelligence Report.’ March 2026. pindrop.com

[7] Mandiant. ‘M-Trends 2026.’ November 2025. mandiant.com

[8] Okta. ‘Businesses at Work Report.’ January 2026. okta.com

[9] DeepMedia. ‘State of Deepfake Detection.’ February 2026. deepmedia.ai

[10] iProov. ‘Threat Intelligence Report 2025.’ January 2025. iproov.com

[11] Surfshark. ‘Deepfake Statistics.’ 2025. surfshark.com

[12] Netlok. ‘How Photolok Works.’ netlok.com
