Authentication at a Crossroads: Preparing for the AI-Powered Threat Landscape of 2026 and Beyond
Kasey Cromer, Netlok | December 4, 2025
Part 1 (November 14, 2025) took a deep dive into the deepfake epidemic itself—the $25 million video call scams, the 1,000%+ increase in attacks since 2023, and why human detection fails roughly 75% of the time. We examined why detection alone cannot win this arms race and outlined an enterprise defense framework.
In Part 2 (November 21, 2025) of this series, we examined the staggering scope of AI-powered fraud—a $40 billion crisis by 2027 that is overwhelming enterprise security teams. We explored how generative AI has transformed the fraud landscape, with 93% of financial institutions expressing serious concern about AI-driven fraud acceleration and deepfake incidents surging by 700%.
Now, in this concluding installment, we look ahead to how these same dynamics will reshape authentication between 2026 and 2028—and what security leaders can do today to get ahead of that curve. The threats documented in Parts 1 and 2 are not static; they are accelerating. As we approach 2026, enterprises face a critical turning point where the convergence of advancing AI capabilities and expanding data exposure creates unprecedented authentication challenges. The question is no longer whether to evolve your security posture, but how quickly you can implement defenses designed for the threats of tomorrow. You can’t afford to wait.
The authentication landscape stands at an inflection point. Forrester predicts that an agentic AI deployment will cause a publicly disclosed breach in 2026, while Gartner warns that by 2027, AI agents will cut the time needed to exploit exposed accounts by 50%. As deepfake technology becomes increasingly accessible and massive data exposures amplify attacker capabilities, traditional authentication methods face obsolescence. This article examines the converging threats shaping 2026 and beyond—and demonstrates why Netlok’s Photolok, with its patented steganography, AI/ML defense, and unique user security features like Duress Photo and 1-Time Use Photo, represents the authentication paradigm shift enterprises require.
The next 12 to 24 months will reshape enterprise cybersecurity in fundamental ways, and leading research firms are issuing stark warnings about what lies ahead.
Forrester’s Predictions 2026: Cybersecurity and Risk report forecasts that an agentic AI deployment will cause a publicly disclosed data breach next year, leading to employee dismissals. As organizations rush to build agentic AI workflows, the lack of proper guardrails means autonomous AI agents may sacrifice accuracy for speed—creating systemic vulnerabilities that cascade across enterprises [1].
Gartner’s analysis is equally sobering. By 2027, AI agents will cut the time it takes threat actors to hijack exposed accounts by 50%. The firm also predicts that by 2028, 40% of social engineering attacks will target executives as well as the broader workforce, with attackers combining social engineering tactics with deepfake audio and video to deceive employees during calls [2]. Perhaps most alarming: by 2028, 25% of enterprise breaches will be traced back to AI agent abuse by both external and malicious internal actors [3].
The World Economic Forum reinforces these concerns, noting that deepfake fraud cases surged 1,740% in North America between 2022 and 2023, with financial losses exceeding $200 million in Q1 2025 alone [4]. Voice cloning now requires just 20-30 seconds of audio, while convincing video deepfakes can be created in 45 minutes using freely available software.
| Prediction | Source |
| --- | --- |
| Agentic AI will cause a publicly disclosed breach in 2026 | Forrester [1] |
| AI agents will cut account exploit time by 50% by 2027 | Gartner [2] |
| 30% of enterprises will consider standalone identity verification (IDV) unreliable by 2026 | Gartner [5] |
| 25% of enterprise breaches will trace back to AI agent abuse by 2028 | Gartner [3] |
| Deepfake fraud projected to surge 162% in 2025 | Pindrop [6] |
The AI fraud threat does not exist in isolation. Its potency is directly amplified by the availability of personal data. When attackers possess comprehensive personal information—names, dates of birth, addresses, Social Security numbers, family relationships—AI-powered fraud becomes exponentially more dangerous and convincing.
Recent events have underscored how data handling practices can dramatically increase this risk. In August 2025, a whistleblower complaint revealed that personal information belonging to more than 300 million Americans had been copied to a cloud environment with reduced security controls. According to reporting from NPR and other outlets, career cybersecurity officials described the situation as “very high risk,” with one internal assessment warning of a potential “catastrophic impact” and noting the possibility of having to reissue Social Security numbers to millions of Americans in the event of a breach [7][8]. The Social Security Administration has stated that it is not aware of any compromise and that data is stored in secure environments with robust safeguards—but the episode underscores how concentrated datasets can amplify identity theft risk if controls fail.
This scenario illustrates a broader concern: as massive datasets containing sensitive personal information become more accessible—whether through breaches, mishandling, or inadequate security—AI-powered attackers gain richer raw material for their schemes. Cybersecurity experts have warned that if bad actors gained access to comprehensive personal information, they could create holistic profiles that enable highly convincing impersonation attacks [9]. The combination of detailed personal data and sophisticated deepfake technology creates what researchers have characterized as a “perfect storm” for identity fraud.
For enterprises, this means authentication systems must assume attackers may already possess significant knowledge about their targets. Traditional knowledge-based authentication—security questions, personal details, even voice recognition—becomes increasingly unreliable when attackers can synthesize convincing responses using AI trained on exposed data.
The fundamental challenge facing enterprises is that authentication methods designed for a pre-AI world are now being systematically dismantled by AI-powered attacks.
Gartner has predicted that by 2026, attacks using AI-generated deepfakes on face biometrics will mean that 30% of enterprises will no longer consider identity verification and authentication solutions reliable in isolation [5]. This represents a seismic shift in enterprise security posture—nearly one-third of organizations abandoning confidence in their existing authentication stack, with direct implications for regulatory exposure, cyber insurance, and board risk oversight.
According to Entrust’s 2026 Identity Fraud Report, deepfakes now account for one in five biometric fraud attempts, with deepfaked selfies increasing by 58% in 2025 and injection attacks surging 40% year-over-year [10]. The report notes that coercion attacks are particularly difficult to detect because victims present their own genuine documents and biometrics, just under pressure or instruction from someone else. The report’s conclusion is blunt: “We’ve crossed a threshold where humans simply can’t rely on their senses anymore.”
The passwordless movement, while representing progress, does not fully address these challenges. A recent CNBC report notes that 92% of CISOs have implemented or are planning passwordless authentication—up from 70% in 2024 [11]. However, many passwordless solutions rely on biometrics that are increasingly vulnerable to deepfake attacks, or on device-based authentication that can be compromised through social engineering.
What enterprises need is not simply “passwordless” authentication, but authentication that is fundamentally resistant to AI-powered attacks—systems where there is no pattern for AI to learn, no biometric to fake, and no knowledge to extract.
Photolok represents a fundamentally different approach to authentication—one designed from the ground up to resist the AI-powered threats that are rendering traditional methods obsolete.
At its core, Photolok is a passwordless authentication solution using patented steganographic photos. Rather than relying on passwords, biometrics, or knowledge-based verification, users authenticate by selecting their coded photos during login. This approach delivers what Netlok describes as “UltraSafe AI/ML login protection” when compared to passwords, passkeys, and biometrics.
Photolok’s AI/ML Defense capability prevents artificial intelligence and machine learning attacks through a simple but powerful principle: randomization. All account photos are randomly placed in photo panels during each login. Because there is no consistent pattern—no predictable sequence or positioning—bots cannot identify which photographs to attack. This randomized, non-predictable login experience deprives agentic AI of the consistent patterns and replayable signals it needs to optimize attacks over time. This fundamentally differs from biometrics (which present a consistent target), passwords (which can be captured or guessed), and behavioral patterns (which can be learned and mimicked).
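To make the randomization principle concrete, here is a minimal illustrative sketch in Python. It is not Netlok’s implementation; the photo-ID inputs, panel size, and panel count are assumptions chosen purely for the example.

```python
import secrets

def build_login_panels(account_photos, decoy_pool, panel_size=9, num_panels=3):
    """Illustrative only: lay out each login's photo panels in a fresh,
    cryptographically random order so no position or sequence ever repeats.
    `account_photos` and `decoy_pool` are hypothetical lists of photo IDs."""
    rng = secrets.SystemRandom()  # OS entropy: non-seedable, non-replayable
    panels = []
    # Pick which of the user's enrolled photos appears in each panel.
    for user_photo in rng.sample(account_photos, num_panels):
        panel = rng.sample(decoy_pool, panel_size - 1) + [user_photo]
        rng.shuffle(panel)  # the correct photo lands in a random cell
        panels.append(panel)
    return panels
```

Because every layout is drawn fresh from a non-seedable system entropy source, nothing a bot learns from observing one session constrains where the correct photos will appear in the next.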
In an era of sophisticated social engineering and physical coercion attacks, Photolok offers a capability no other authentication system provides: the Duress Photo.
Photolok is the only login method that uses a “visual silent alarm.” When account owners feel they are in danger or are being forced to give a bad actor access, they can activate the Duress Security Alert by selecting their designated Duress photo in the first photo panel. Clicking it immediately sends an email and text notification to IT security and other designated personnel, all while the user continues logging in to their destination without any disruption that might alert the attacker.
This capability addresses a critical gap in enterprise security. As Entrust notes, coercion attacks are particularly hard to detect because victims use their own documents and biometrics under pressure; a Duress Photo gives those victims a safe, covert signal path that traditional biometrics simply do not offer. For example, if a finance leader is pressured into disclosing confidential information on a video call by a convincing deepfake impersonation, they can silently trigger Duress while “complying” with the request—enabling immediate response from security teams while preventing the authorization of fraudulent transactions.
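Conceptually, the control flow resembles the sketch below. This is a hypothetical illustration, not Netlok’s code; the `User` fields and the `notify_security` callback are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class User:
    id: str
    duress_photo_id: str
    enrolled_photo_ids: set

def handle_photo_selection(user: User, selected_photo_id: str, notify_security) -> bool:
    """Hypothetical handler: a duress selection alerts security out-of-band
    while the login proceeds normally, so the attacker sees nothing unusual."""
    if selected_photo_id == user.duress_photo_id:
        # Silent alarm: e-mail/text security; never change anything in the UI.
        notify_security(user_id=user.id, event="duress_login")
    # Authentication continues identically in both branches.
    return selected_photo_id in user.enrolled_photo_ids | {user.duress_photo_id}
```

The essential property is that the duress branch is observable only to the security team: from the attacker’s side of the screen, both code paths look and behave identically.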
Photolok is also the only login method that gives users the option of using a temporary photo to prevent shoulder surfing in office or public settings. The 1-Time Use Photo provides enhanced remote security by automatically removing itself from the user’s account after a single use.
Whether an attacker uses a camera, screen-capture malware, or simple shoulder surfing, the 1-Time Use Photo protects the account because it becomes invalid immediately after use: a recorded or screen-shared login session yields a photo that is useless on the next attempt. This feature is particularly valuable for remote workers, traveling executives, and any scenario where login activity might be observed—addressing vulnerabilities that traditional authentication methods cannot mitigate.
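The underlying mechanism is single-use credential semantics. The sketch below uses an assumed in-memory store as a stand-in for whatever persistence layer a real deployment would use; it illustrates the idea, not Netlok’s implementation.

```python
# Hypothetical in-memory store mapping user IDs to their 1-Time Use photo IDs.
one_time_photos: dict[str, set[str]] = {}

def authenticate_one_time_photo(user_id: str, selected_photo_id: str) -> bool:
    """A 1-Time Use photo validates exactly once and is then discarded, so a
    recorded, screen-captured, or shoulder-surfed login replays to nothing."""
    valid = selected_photo_id in one_time_photos.get(user_id, set())
    if valid:
        # Consume immediately: the same photo fails on every later attempt.
        one_time_photos[user_id].discard(selected_photo_id)
    return valid
```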
Beyond these distinctive capabilities, Photolok incorporates additional security measures, including integration with existing authenticators for access codes, device authorization controls, and patented steganography that embeds encrypted codes within photos—making them highly resistant to external observation and AI analysis. The system also simplifies adoption across diverse user groups, eliminating language and literacy barriers that can limit the effectiveness of text-based authentication.
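Photolok’s patented steganographic scheme is proprietary, but the general technique of hiding an encrypted code inside image data can be illustrated with classic least-significant-bit (LSB) embedding. The functions below are a generic textbook sketch, not Netlok’s method.

```python
def embed_code_lsb(pixels: list[int], code: bytes) -> list[int]:
    """Generic illustration: hide `code` in the lowest bit of each pixel
    value; the visual change is imperceptible, yet the code is recoverable."""
    assert len(pixels) >= 8 * len(code), "image too small for this payload"
    bits = [(byte >> i) & 1 for byte in code for i in range(8)]  # LSB-first
    stego = pixels.copy()
    for idx, bit in enumerate(bits):
        stego[idx] = (stego[idx] & ~1) | bit  # overwrite the lowest bit
    return stego

def extract_code_lsb(pixels: list[int], length: int) -> bytes:
    """Recover `length` bytes previously embedded by embed_code_lsb."""
    out = bytearray()
    for byte_i in range(length):
        value = 0
        for bit_i in range(8):
            value |= (pixels[byte_i * 8 + bit_i] & 1) << bit_i
        out.append(value)
    return bytes(out)
```

A naive LSB scheme like this is detectable by statistical analysis, which is why the codes are encrypted before embedding and why, per Netlok, Photolok relies on patented techniques designed to resist external observation and AI analysis.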
The World Economic Forum has stated plainly that traditional verification methods are “no longer sufficient” against AI-enabled fraudsters [12]. That aligns with the conclusion from Part 1 of this series: detection alone cannot close the gap against adaptive, AI-enabled adversaries; the underlying authentication factor must change. Global cybercrime now represents a $10.5 trillion industry—larger than the GDP of every country except the United States and China. Deloitte projects AI-enabled fraud losses in the U.S. will reach $40 billion by 2027.
The research is clear: enterprises that delay authentication modernization face mounting risk—in incident costs, regulatory exposure, and erosion of customer trust. As this three-part series has documented, the AI fraud threat is not theoretical—it is present, accelerating, and systematically defeating legacy security measures.
The choice facing enterprise leaders is straightforward: evolve authentication now, implement systems designed for AI-era threats, or become another statistic in the growing tally of successful AI-powered attacks. Photolok’s patented steganography technology, combined with unique security features like Duress Photo and 1-Time Use Photo, offers a proven path forward—authentication that protects against the threats of 2026 and beyond.
Don’t wait for AI-powered fraudsters to target your organization. Discover how Photolok’s patented steganography and AI-resistant authentication can protect your enterprise while improving user experience.
Request Your Personalized Demo
Author: Kasey Cromer is Director of Customer Experience at Netlok.
[1] Predictions 2026: Cybersecurity and Risk — Forrester (October 2025)
[4] Detecting dangerous AI is essential in the deepfake era — World Economic Forum (July 2025)
[6] Deepfake Fraud Could Surge 162% in 2025 — Pindrop (July 2025)
[7] Whistleblower says DOGE put Social Security numbers at risk — NPR (August 2025)
[9] Whistleblower: DOGE Put Millions of Americans’ Data at Risk — TIME (August 2025)
[11] More companies are shifting workers to passwordless authentication — CNBC (November 2025)
[12] AI-driven cybercrime is growing, here’s how to stop it — World Economic Forum (January 2025)
[13] Deepfake Statistics & Trends 2025 — Keepnet Labs (November 2025)
[14] AI-powered fraud is exploding — Cybernews/Entrust (November 2025)
[15] Forrester: Agentic AI-Powered Breach Will Happen in 2026 — Infosecurity Magazine (October 2025)