Kasey Cromer, Netlok | December 4, 2025
Part 1 (November 14, 2025) took a deeper dive into the deepfake epidemic itself—the $25 million video call scams, the 1,000%+ increase in attacks since 2023, and why human detection capabilities are failing at a 75% rate. We examined why detection alone cannot win this arms race and outlined an enterprise defense framework.
In Part 2 (November 21, 2025) of this series, we examined the staggering scope of AI-powered fraud—a $40 billion crisis by 2027 that is overwhelming enterprise security teams. We explored how generative AI has transformed the fraud landscape, with 93% of financial institutions expressing serious concern about AI-driven fraud acceleration and deepfake incidents surging by 700%.
Now, in this concluding installment, we look ahead to how these same dynamics will reshape authentication between 2026 and 2028—and what security leaders can do today to get ahead of that curve. The threats documented in Parts 1 and 2 are not static; they are accelerating. As we approach 2026, enterprises face a critical turning point where the convergence of advancing AI capabilities and expanding data exposure creates unprecedented authentication challenges. The question is no longer whether to evolve your security posture, but how quickly you can implement defenses designed for the threats of tomorrow. You can’t afford to wait.
The authentication landscape stands at an inflection point. Forrester predicts that an agentic AI deployment will cause a publicly disclosed breach in 2026, while Gartner warns that by 2027, AI agents will cut the time needed to exploit exposed accounts by 50%. As deepfake technology becomes increasingly accessible and massive data exposures amplify attacker capabilities, traditional authentication methods face obsolescence. This article examines the converging threats shaping 2026 and beyond—and demonstrates why Netlok’s Photolok, with its patented steganography, AI/ML defense, and unique user security features like Duress Photo and 1-Time Use Photo, represents the authentication paradigm shift enterprises require.
The next 12 to 24 months will reshape enterprise cybersecurity in fundamental ways, and leading research firms are issuing stark warnings about what lies ahead.
Forrester’s Predictions 2026: Cybersecurity and Risk report forecasts that an agentic AI deployment will cause a publicly disclosed data breach next year, leading to employee dismissals. As organizations rush to build agentic AI workflows, the lack of proper guardrails means autonomous AI agents may sacrifice accuracy for speed—creating systemic vulnerabilities that cascade across enterprises [1].
Gartner’s analysis is equally sobering. By 2027, AI agents will accelerate the time it takes threat actors to hijack exposed accounts by 50%. The firm also predicts that 40% of social engineering attacks will target executives as well as the broader workforce by 2028, with attackers combining social engineering tactics with deepfake audio and video to deceive employees during calls [2]. Perhaps most alarming: by 2028, 25% of enterprise breaches will be traced back to AI agent abuse from both external and malicious internal actors [3].
The World Economic Forum reinforces these concerns, noting that deepfake fraud cases surged 1,740% in North America between 2022 and 2023, with financial losses exceeding $200 million in Q1 2025 alone [4]. Voice cloning now requires just 20-30 seconds of audio, while convincing video deepfakes can be created in 45 minutes using freely available software.
| Prediction | Source |
| Agentic AI will cause a public breach in 2026 | Forrester [1] |
| AI agents will reduce account exploit time by 50% by 2027 | Gartner [2] |
| 30% of enterprises will consider standalone IDV unreliable by 2026 | Gartner [5] |
| 25% of enterprise breaches traced to AI agent abuse by 2028 | Gartner [3] |
| Deepfake fraud projected to surge 162% in 2025 | Pindrop [6] |
The AI fraud threat does not exist in isolation. Its potency is directly amplified by the availability of personal data. When attackers possess comprehensive personal information—names, dates of birth, addresses, Social Security numbers, family relationships—AI-powered fraud becomes exponentially more dangerous and convincing.
Recent events have underscored how data handling practices can dramatically increase this risk. In August 2025, a whistleblower complaint revealed that personal information belonging to more than 300 million Americans had been copied to a cloud environment with reduced security controls. According to reporting from NPR and other outlets, career cybersecurity officials described the situation as “very high risk,” with one internal assessment warning of a potential “catastrophic impact” and noting the possibility of having to reissue Social Security numbers to millions of Americans in the event of a breach [7][8]. The Social Security Administration has stated that it is not aware of any compromise and that data is stored in secure environments with robust safeguards—but the episode underscores how concentrated datasets can amplify identity theft risk if controls fail.
This scenario illustrates a broader concern: as massive datasets containing sensitive personal information become more accessible—whether through breaches, mishandling, or inadequate security—AI-powered attackers gain richer raw material for their schemes. Cybersecurity experts have warned that if bad actors gained access to comprehensive personal information, they could create holistic profiles that enable highly convincing impersonation attacks [9]. The combination of detailed personal data and sophisticated deepfake technology creates what researchers have characterized as a “perfect storm” for identity fraud.
For enterprises, this means authentication systems must assume attackers may already possess significant knowledge about their targets. Traditional knowledge-based authentication—security questions, personal details, even voice recognition—becomes increasingly unreliable when attackers can synthesize convincing responses using AI trained on exposed data.
The fundamental challenge facing enterprises is that authentication methods designed for a pre-AI world are now being systematically dismantled by AI-powered attacks.
Gartner has predicted that by 2026, attacks using AI-generated deepfakes on face biometrics will mean that 30% of enterprises will no longer consider identity verification and authentication solutions reliable in isolation [5]. This represents a seismic shift in enterprise security posture—nearly one-third of organizations abandoning confidence in their existing authentication stack, with direct implications for regulatory exposure, cyber insurance, and board risk oversight.
According to Entrust’s 2026 Identity Fraud Report, deepfakes now account for one in five biometric fraud attempts, with deepfaked selfies increasing by 58% in 2025 and injection attacks surging 40% year-over-year [10]. The report notes that coercion attacks are particularly difficult to detect because victims use their own genuine documents and biometrics—only under pressure or instruction from someone else. The report’s conclusion is blunt: “We’ve crossed a threshold where humans simply can’t rely on their senses anymore.”
The passwordless movement, while representing progress, does not fully address these challenges. A recent CNBC report notes that 92% of CISOs have implemented or are planning passwordless authentication—up from 70% in 2024 [11]. However, many passwordless solutions rely on biometrics that are increasingly vulnerable to deepfake attacks, or on device-based authentication that can be compromised through social engineering.
What enterprises need is not simply “passwordless” authentication, but authentication that is fundamentally resistant to AI-powered attacks—systems where there is no pattern for AI to learn, no biometric to fake, and no knowledge to extract.
Photolok represents a fundamentally different approach to authentication—one designed from the ground up to resist the AI-powered threats that are rendering traditional methods obsolete.
At its core, Photolok is a passwordless authentication solution using patented steganographic photos. Rather than relying on passwords, biometrics, or knowledge-based verification, users authenticate by selecting their coded photos during login. This approach delivers what Netlok describes as “UltraSafe AI/ML login protection” when compared to passwords, passkeys, and biometrics.
Photolok’s AI/ML Defense capability prevents artificial intelligence and machine learning attacks through a simple but powerful principle: randomization. All account photos are randomly placed in photo panels during each login. Because there is no consistent pattern—no predictable sequence or positioning—bots cannot identify which photographs to attack. This randomized, non-predictable login experience deprives agentic AI of the consistent patterns and replayable signals it needs to optimize attacks over time. This fundamentally differs from biometrics (which present a consistent target), passwords (which can be captured or guessed), and behavioral patterns (which can be learned and mimicked).
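To make the randomization principle concrete, the sketch below shows one way a login panel could be assembled so that photo positions never repeat predictably between sessions. This is a minimal illustration only, not Netlok’s implementation; the function name, panel size, and decoy handling are hypothetical.

```python
import secrets

def build_login_panel(user_photo_ids, decoy_photo_ids, panel_size=9):
    """Assemble one login panel with the user's photos in freshly randomized positions.

    Illustrative only (hypothetical helper): a production system would draw decoys
    from a large pool, serve images over an authenticated channel, and verify the
    selection server-side against the recorded layout.
    """
    decoys_needed = panel_size - len(user_photo_ids)
    pool = list(user_photo_ids) + list(decoy_photo_ids)[:decoys_needed]

    # Cryptographically secure shuffle: every login produces a different layout,
    # so there is no stable position or sequence for a bot or ML model to learn.
    panel = []
    while pool:
        panel.append(pool.pop(secrets.randbelow(len(pool))))
    return panel

# The same account sees a different arrangement on every login attempt.
print(build_login_panel(["photo_A"], [f"decoy_{i}" for i in range(20)]))
```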
In an era of sophisticated social engineering and physical coercion attacks, Photolok offers a capability no other authentication system provides: the Duress Photo.
Photolok is the only login method that uses a “visual silent alarm.” When account owners feel they are in danger or are being forced to provide access to a bad actor, they can activate the Duress Security Alert by selecting their designated Duress photo in the first photo panel. When it is clicked, an email and text notification are sent immediately to IT security and other designated personnel, all while allowing the user to continue logging in to their destination without any disruption that might alert the attacker.
This capability addresses a critical gap in enterprise security. As Entrust notes, coercion attacks are particularly hard to detect because victims use their own documents and biometrics under pressure; a Duress Photo gives those victims a safe, covert signal path that traditional biometrics simply do not offer. For example, if a finance leader is pressured into disclosing confidential information on a video call by a convincing deepfake impersonation, they can silently trigger Duress while “complying” with the request—enabling immediate response from security teams while preventing the authorization of fraudulent transactions.
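As a rough illustration of the property described above, the sketch below shows how a duress selection might be handled so that the alert goes out of band while the login response looks identical to a normal success. The function names and alerting channel are hypothetical; Netlok’s actual pipeline is not public.

```python
import secrets
import threading

def notify_security_team(user_id):
    """Placeholder for the out-of-band alert (e.g., email and SMS to IT security)."""
    print(f"[DURESS ALERT] covert alarm raised for account {user_id}")

def create_session(user):
    """Placeholder session issuance."""
    return {"user": user["id"], "token": secrets.token_hex(16)}

def complete_login(user, selected_photo_id):
    """Finish authentication; silently raise a duress alert when the duress photo is chosen.

    An attacker watching the screen sees only a normal, successful login --
    the response is identical whether or not the alarm fired.
    """
    if selected_photo_id == user["duress_photo_id"]:
        # Fire-and-forget so the alert never delays or changes the visible response.
        threading.Thread(target=notify_security_team, args=(user["id"],), daemon=True).start()
    return {"status": "ok", "session": create_session(user)}

user = {"id": "u-1001", "duress_photo_id": "photo_D"}
print(complete_login(user, "photo_D"))  # alarm raised covertly; login still succeeds
```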
Photolok is also the only login method that gives users the option of using a temporary photo to prevent shoulder surfing in office or public settings. The 1-Time Use Photo provides enhanced remote security by automatically removing itself from the user’s account after a single use.
If someone is using a camera, screen capture malware, or simply looking over a user’s shoulder, the 1-Time Photo protects the account because it becomes invalid immediately after use. If someone records or screenshares a login session, that 1-Time Use Photo is useless on the next attempt. This feature is particularly valuable for remote workers, traveling executives, and any scenario where login activity might be observed—addressing vulnerabilities that traditional authentication methods cannot mitigate.
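The single-use behavior can be pictured with a small sketch like the one below (a toy model, not Netlok’s code): once the temporary photo authenticates, it is removed from the account, so anything captured by a camera or screen recorder is worthless on the next attempt.

```python
class PhotoAccount:
    """Toy model of an account whose one-time photos expire after a single use."""

    def __init__(self, permanent_photos, one_time_photos):
        self.permanent = set(permanent_photos)
        self.one_time = set(one_time_photos)

    def authenticate(self, selected_photo_id):
        if selected_photo_id in self.permanent:
            return True
        if selected_photo_id in self.one_time:
            # Invalidate immediately: an observed or recorded login cannot be replayed.
            self.one_time.remove(selected_photo_id)
            return True
        return False

acct = PhotoAccount(permanent_photos={"p1"}, one_time_photos={"tmp42"})
assert acct.authenticate("tmp42") is True   # legitimate first use succeeds
assert acct.authenticate("tmp42") is False  # replay by an observer fails
```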
Beyond these distinctive capabilities, Photolok incorporates additional security measures, including integration with existing authenticators for access codes, device authorization controls, and patented steganography that embeds encrypted codes within photos—making them highly resistant to external observation and AI analysis. The system also simplifies adoption across diverse user groups, eliminating language and literacy barriers that can limit the effectiveness of text-based authentication.
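Photolok’s patented steganographic scheme is proprietary, so its details are not public, but the general idea of hiding an encrypted code inside image data can be illustrated with a textbook least-significant-bit (LSB) sketch. The code below is purely illustrative and is not Netlok’s method; real schemes add keyed bit placement, encryption, and robustness to re-encoding.

```python
def embed_bits(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide payload bits in the least-significant bit of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in payload for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for payload")
    out = bytearray(pixels)
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit  # change at most the lowest bit of each byte
    return out

def extract_bits(pixels, length):
    """Recover `length` bytes previously embedded with embed_bits()."""
    data = bytearray()
    for byte_idx in range(length):
        value = 0
        for bit_idx in range(8):
            value = (value << 1) | (pixels[byte_idx * 8 + bit_idx] & 1)
        data.append(value)
    return bytes(data)

image = bytearray(range(256)) * 4   # stand-in for raw pixel data
secret = b"\x9a\x01\x7f"            # e.g., an already-encrypted code
stego = embed_bits(image, secret)
assert extract_bits(stego, len(secret)) == secret
```

Because only the lowest bit of each byte changes, the carrier image is visually indistinguishable from the original, which is what makes the embedded code resistant to casual observation.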
The World Economic Forum has stated plainly that traditional verification methods are “no longer sufficient” against AI-enabled fraudsters [12]. That aligns with the conclusion from Part 1 of this series: detection alone cannot close the gap against adaptive, AI-enabled adversaries; the underlying authentication factor must change. Global cybercrime now represents a $10.5 trillion industry—larger than the GDP of every country except the United States and China. Deloitte projects AI-enabled fraud losses in the U.S. will reach $40 billion by 2027.
The research is clear: enterprises that delay authentication modernization face mounting risk—in incident costs, regulatory exposure, and erosion of customer trust. As this three-part series has documented, the AI fraud threat is not theoretical—it is present, accelerating, and systematically defeating legacy security measures.
The choice facing enterprise leaders is straightforward: evolve authentication now, implement systems designed for AI-era threats, or become another statistic in the growing tally of successful AI-powered attacks. Photolok’s patented steganography technology, combined with unique security features like Duress Photo and 1-Time Use Photo, offers a proven path forward—authentication that protects against the threats of 2026 and beyond.
Don’t wait for AI-powered fraudsters to target your organization. Discover how Photolok’s patented steganography and AI-resistant authentication can protect your enterprise while improving user experience.
Request Your Personalized Demo
Author: Kasey Cromer is Director of Customer Experience at Netlok.
[1] Predictions 2026: Cybersecurity and Risk — Forrester (October 2025)
[4] Detecting dangerous AI is essential in the deepfake era — World Economic Forum (July 2025)
[6] Deepfake Fraud Could Surge 162% in 2025 — Pindrop (July 2025)
[7] Whistleblower says DOGE put Social Security numbers at risk — NPR (August 2025)
[9] Whistleblower: DOGE Put Millions of Americans’ Data at Risk — TIME (August 2025)
[11] More companies are shifting workers to passwordless authentication — CNBC (November 2025)
[12] AI-driven cybercrime is growing, here’s how to stop it — World Economic Forum (January 2025)
[13] Deepfake Statistics & Trends 2025 — Keepnet Labs (November 2025)
[14] AI-powered fraud is exploding — Cybernews/Entrust (November 2025)
[15] Forrester: Agentic AI-Powered Breach Will Happen in 2026 — Infosecurity Magazine (October 2025)
Kasey Cromer, Netlok | November 21, 2025
Executive Summary
Global cybercrime is now a $10.5 trillion industry — larger than the GDP of every country except the US and China. AI-powered fraud has reached a critical tipping point, with enterprise banks reporting a 70% increase in fraud over the past year and deepfake incidents surging by 700% [1][2][3]. As fraudsters weaponize generative AI to create hyper-realistic deepfakes and sophisticated phishing campaigns, traditional authentication methods are failing catastrophically. This analysis examines how Netlok’s Photolok – using patented photo-based steganography with AI/ML defense capabilities – provides the most effective defense against AI-driven login identity fraud.
The AI Fraud Explosion: A $40 Billion Problem by 2027
The transformation of fraud through artificial intelligence represents one of the most significant security challenges facing enterprises today. According to Deloitte’s latest analysis, GenAI-driven fraud losses in the United States alone could exceed $40 billion by 2027, up from $12 billion in 2023 [4]. This 233% increase demonstrates the exponential threat posed by AI-enabled attacks.
The numbers tell a devastating story:
The Operational Nightmare: When AI Outpaces Human Detection
The surge in AI-powered attacks isn’t just a financial problem—it’s overwhelming fraud prevention teams across industries. A recent Sift report reveals that AI-driven scams rose by 456% between May 2024 and April 2025, with fraudsters crafting convincing scams up to 40% faster using AI tools [9]. This acceleration has overwhelmed traditional fraud prevention systems.
Real-World Devastation: The Deepfake Authentication Crisis
$25 Million Video Call Scam: A company’s finance team member paid out $25 million after participating in a video call where every participant except the victim was a deepfake, including the CFO who gave authorization for the transfer.[13]
Voice Cloning: 3,000% Surge: Banking institutions report deepfake voice attacks on customer service centers increased 3,000% year-over-year, with criminals needing just three seconds of audio to clone someone’s voice.[14]
Synthetic Identity: $23B by 2030: AI creates fictitious identities by blending real and fake information, with projected losses of $23 billion by 2030.[15]
Why Traditional Authentication Is Obsolete
Current authentication methods were designed for a pre-AI world and are catastrophically failing against modern threats:
| Authentication Type | Documented Vulnerabilities | Sources |
| Passwords | Phishing and credential stuffing; involved in 49% of all data breaches | [Spacelift] |
| SMS/Email OTP | SIM swapping attacks increased 400% | [7] |
| Voice Biometrics | Voice clones created from 3-second samples; 3,000% increase in attacks | [14] |
| Facial Recognition | Deepfakes increased 700%; bypass liveness detection | [2] |
The Photolok Advantage: Authentication Built for the AI Era
Photolok is a passwordless authentication solution built on patented steganographic photos. Rather than relying on passwords or biometrics, Photolok has users authenticate by selecting their personally chosen coded photos during login.
Key advantages of Photolok include:
The Window Is Closing
With AI fraud tools now accessible to anyone with an internet connection, the question isn’t whether your organization will be targeted—it’s when. The World Economic Forum warns that traditional verification methods are “no longer sufficient” against AI-enabled fraudsters [16].
The choice is stark: evolve authentication now or become another statistic in the $10.5 trillion cybercrime industry. Photolok’s patented steganography technology offers a proven path forward, combining AI/ML defense with operational efficiency and user satisfaction.
Take Action Against AI Fraud
Don’t wait for AI-powered fraudsters to target your organization. Discover how Photolok’s patented steganography and AI-resistant authentication can protect your enterprise while improving user experience.
Request Your Personalized Demo
Author: Kasey Cromer is Director of Customer Experience at Netlok.
Sources
Kasey Cromer, Netlok | November 14, 2025
Executive Summary
A single deepfake video call cost a multinational firm $25 million—and this is just the beginning. AI-driven deepfakes have exploded by over 1,000% since 2023, now fueling sophisticated attacks from executive impersonation to credential theft across every industry vertical.[1][4][22] With deepfake-as-a-service platforms offering custom attacks for under $100 and detection accuracy struggling at just 25%, enterprises face an unprecedented authentication crisis.[3][7] This guide demonstrates why Netlok’s Photolok – using patented steganography photos with AI/ML defense capabilities – offers the most robust defense against AI-powered login identity fraud.
The Deepfake Explosion: Enterprise Impact by the Numbers
The statistics paint a devastating picture of the current threat landscape:
| Threat | Current State | Business Impact |
| Deepfake attack volume | 1,000%+ increase since 2023 [1][4] | Exponential growth overwhelming security teams |
| Enterprise targeting | 73% of Fortune 500 companies attacked [22] | Even the largest organizations are targets |
| Financial damage | $4.2M average loss per attack [2] | Direct bottom-line impact |
| Human detection capability | 75% failure rate [7] | Traditional security training ineffective |
| Executive impersonation | 89% of attacks target C-suite [1] | Enables unauthorized high-value transactions |
Real-World Deepfake Devastation
$25 Million Teams Deepfake: A European energy conglomerate lost $25M when attackers used a real-time deepfake video during a Teams call, perfectly mimicking the CFO to authorize wire transfers.[1][2]
State-Sponsored Infiltration: North Korean hackers used deepfake IDs and video interviews to infiltrate 67 tech companies as remote employees, establishing listening posts for espionage and IP theft.[1][3]
Banking Voice Attacks: 500% Surge: Major banks report AI-generated voices bypassing biometric systems in 31% of tests, with deepfake-enabled account takeovers increasing 500%.[4][20]
Deepfake sophistication has reached a critical threshold. These aren’t grainy videos anymore—they’re real-time, interactive deepfakes that fool seasoned security professionals. Traditional authentication is becoming obsolete as deepfake technology advances. [2][8]
Why Detection Fails: The Technology Arms Race
Modern deepfakes exploit psychological trust factors—familiar faces, expected contexts, and urgent scenarios—making technical detection secondary to social engineering success.[7][8] With deepfake-as-a-service platforms offering custom attacks for under $100, every employee becomes a potential target, overwhelming traditional security teams.[1][3]
Enterprise Defense Framework
Comprehensive Deepfake Defense Stack
| Security Function | Examples of Providers | How It Works | Business Value |
| Prevent Access | Photolok Authentication | Replaces passwords with AI-resistant photos | Stops deepfakes before they enter systems |
| Detect Threats | Reality Defender API | Scans all video/audio in real-time [8][17] | Catches sophisticated deepfakes others miss |
| Train Staff | Breacher.ai Simulations | Monthly deepfake detection drills [10][11] | Reduces social engineering success rates |
| Verify Requests | Direct Verification | Contact person directly through pre-verified method [2][15] | Prevents unauthorized financial transfers |
Examples of Deepfake Detection Training:
Note: These platforms are listed for informational purposes only and do not constitute an endorsement.
Why Passwords Failed—How Photolok Succeeds
Authentication Methods and Known Vulnerabilities:
| Method | Vulnerabilities | Observed Impact | Sources |
| Passwords | Phishing, credential stuffing | 49% of breaches | [Spacelift] |
| SMS/Voice OTP | SIM swapping, voice cloning | 400% increase in attacks | [7], [14] |
| Biometrics | Deepfakes, spoofing | 31% bypass rate | [4][20] |
Photolok’s Revolutionary Approach:
The Path Forward
Deepfakes represent a fundamental shift in the threat landscape—rendering traditional authentication obsolete while democratizing sophisticated attacks.[1][3][4] Unfortunately, many organizations still rely on password-based authentication — an approach increasingly outmatched by AI-driven, deepfake-enabled attacks.[7][22] But those embracing photo-based authentication with patented steganography, continuous training, and proactive detection build resilience against even nation-state actors.[1][8] The choice is clear: evolve authentication now or become tomorrow’s breach headline.[4][25]
Ready to Protect Your Enterprise?
See how Photolok can defend your organization against deepfake attacks and AI-powered fraud. Our team will demonstrate how patented steganography and AI-resistant authentication can secure your most critical assets.
Schedule Your Photolok Demo Today
Author: Kasey Cromer is Director of Customer Experience at Netlok.
Resources
Kasey Cromer, Netlok | October 6, 2025
Executive Summary
2025 is setting new records for cyberattacks, with over 16 billion passwords exposed and more than half of data breaches involving personally identifiable information (PII). Given increased regulatory scrutiny, rising penalties, and growing customer-facing risks, combined with new methods of protection, every digital service user should take proactive steps to protect themselves.[1][2][3]
1. Data Breach by the Numbers
Defining Personally Identifiable Information (PII): PII is any data that can be used to distinguish or trace an individual’s identity, either on its own or when combined with other information. This includes direct identifiers, such as full names, Social Security numbers, passport information, or biometric data (e.g., fingerprints, facial scans), and indirect identifiers, such as date of birth, race, gender, or place of birth, which can reveal a person’s identity when combined with other data.[4][5][6] Sensitive PII includes information like financial details, medical records, driver’s license numbers, phone numbers, and email addresses, making this data highly valuable to cybercriminals. Protecting PII is crucial to prevent identity theft and unauthorized use.
| Metric (2024–2025) | Value | Source |
| Passwords exposed | 16 billion | [1] |
| Global cost per breach | $4.88M | [2] |
| U.S. cost per breach | $9.36M | [7] |
| Breaches exposing PII | 53% | [3] |
| Average cost per PII record | $173-$189 | [3] |
| Regulatory fines (32% of orgs) | $100,000+ | [8] |
Breach Volume Trends 2021–2025

| Year | Data Breaches |
| 2021 | 1,100 |
| 2022 | 1,400 |
| 2023 | 1,700 |
| 2024 | 2,100 |
| 2025 YTD | 2,500 |
2. Who Gets Hurt—and How?
Victims of recent breaches recount losing retirement savings, having mortgage applications denied, and enduring relentless phishing and fraud attacks. A Connecticut bank customer saw their information used to open credit cards. Another family faced insurance fraud after health data was leaked. The takeaway: even when attackers don’t steal money immediately, exposed personal information often causes financial, emotional, and reputational turmoil for years.[9][10]
“The shift we’re seeing in 2025 is from passive acceptance of breaches to active customer empowerment. New regulations, better insurance options, and innovative authentication technologies are giving consumers real tools to protect themselves—but only if they use them.”
— Industry perspective from leading cybersecurity analysts[2][3]
3. Salesforce as Case Study—But Risks Are Everywhere
The high-profile 2025 Salesforce breach impacted thousands of organizations, exposing credentials and customer data through a third-party integration. Yet these methods—phishing, stolen PII, exploiting software integrations—also enable attacks on hospitals, insurers, banks, universities, and government offices across the globe. Every digital user is potentially a target.[11][12][13]
Attack Vectors by Industry (2025)

| Industry | Share of Data Breaches |
| Healthcare | 35% |
| Financial | 28% |
| Retail/E-commerce | 22% |
| Government | 10% |
| Other | 5% |
4. Regulation & Insurance: What Changed in 2025
Regulatory Breach Notice Deadlines—At a Glance
| State/Regulation | Deadline |
| NY, CA | Immediate |
| Oklahoma | 48 hours |
| HIPAA (all U.S. healthcare) | Up to 60 days |
5. Emotional & Financial Toll: Human Stories Matter
Exposed PII allows cybercriminals to send customized scam emails, create socially engineered support lines, and commit medical or financial fraud in victims’ names. Victims often spend months, sometimes years, repairing records, refuting fraudulent activity, and regaining lost access. For most simple cases, recovery is possible within weeks to a few months, but for a substantial minority, especially those involving government fraud or major financial harm, the process can extend for 1-2 years or longer. [18]
Average Recovery Timeline After Breach

| Phase | Timeframe |
| Breach detection | Day 0 |
| Notification period | Days 1–7 |
| Account security measures | Days 7–30 |
| Credit monitoring setup | Days 30–90 |
| Full recovery process | Months 3–24 |
6. What Every Customer Should Do
Within 24 hours of breach notice:
Within 48 hours:
Week 1:
First month:
Ongoing:
7. Why Passwords Are the Problem—and Photolok Is the Solution
Traditional passwords remain the weakest link in cybersecurity, with 88% of web application attacks exploiting stolen credentials.[3] That’s why at Netlok, we’ve developed Photolok—a revolutionary visual authentication system that eliminates passwords entirely.
How Photolok Protects You:
Visual Authentication
Instead of typing passwords that can be stolen, you select encrypted photos from Photolok’s proprietary library and log in to your private account. Hackers can’t use what they can’t steal.
One-Time Use Photos
Each photo can be set for single use, expiring after login. Even if someone sees you authenticate, they can’t reuse that image.
Duress Protection
Select a special “duress photo” to silently alert authorities or trusted contacts if you’re forced to log in under threat—a feature no password can offer.
Easy Setup & Management
Built for Everyone
From tech-savvy professionals to seniors who struggle with passwords, Photolok’s intuitive design makes strong security accessible to all users.
Real-World Impact:
When the recent Salesforce breaches exposed consumer passwords, Photolok users remained protected. You can’t phish a photo that changes with each login.
Ready to move beyond passwords? Learn more about Photolok or Request a Demo to see how visual authentication can protect your accounts today.
8. The Path Forward
Data breaches aren’t slowing down—they’re accelerating. But customers don’t have to be victims. Through vigilance, advocacy, and adoption of advanced authentication solutions like Photolok, every user can take control of their digital security.
Author & Credentials
Kasey Cromer is Director of Customer Experience at Netlok, with over a decade of focus on authentication, incident response, and SaaS security.
Resources
Published October 2025. Content reviewed quarterly for accuracy and compliance. Netlok’s Photolok solution is featured as an innovative approach to password-free authentication in the evolving cybersecurity landscape.
K. Cromer, Netlok | September 8, 2025
This analysis builds on Netlok’s ongoing research into wrench attack vulnerabilities. For additional context, visit our blog resources.
The darkest prediction in cryptocurrency security has come true: As of August 2025, wrench attacks against crypto holders are averaging more than one incident per week worldwide, with 30+ documented cases in less than half a year¹. Bitcoin trades near $122,000—over 50% higher than a year ago—fueling a shift from sophisticated hacking to old-fashioned violence².
As crypto values hit historic highs and identities are exposed via massive data breaches, security experts warn of “a brutal convergence of the speed of cybercrime with the violence of street crime”³. Recent statistics confirm this threat has evolved from isolated events to systematic targeting, making duress-resistant authentication more critical than ever.
The Numbers Tell a Chilling Story
| Threat Factor | Before 2025 | 2025+ Reality |
| Attack Frequency | 18 cases (2023), 24 cases (2024) | 30+ cases in less than half a year¹ |
| Geographic Spread | Mostly isolated in the US | Global: France, U.S., UK, Canada, Asia⁴ |
| Target Sophistication | Crypto-savvy users with strong digital security | Advanced users with cold wallets are equally vulnerable |
| Criminal Methods | Opportunistic robberies | Organized kidnappings, family targeting, weeks-long captivity⁵ |
| Price Correlation | Wrench attacks did not reliably increase with rising Bitcoin prices | Direct link to Bitcoin’s $122,000 highs² |
| Insurance Response | No specialized policies | Lloyd’s of London now offering wrench attack coverage⁶ |
Why Paris Became Ground Zero
France, particularly Paris, has emerged as the epicenter of crypto violence. In one prominent case, a crypto executive was kidnapped from his home, while others saw family members targeted in broad daylight⁷. Cases aren’t limited to continental Europe: the U.S., UK, Canada, and Asia have all reported wrench attacks in 2025⁴.
What began as isolated cases is now a global issue, with organized crime groups and opportunistic actors exploiting public profiles and personal data⁸.
Where Traditional Security Fails
“The brutal reality is that seemingly cryptographically perfect systems fail completely when someone puts a gun to your head”⁹.
Traditional multi-factor authentication, hardware wallets, and encryption offer no real protection against physical coercion.
Victims report beatings, electric shocks, and even prolonged captivity until attackers achieved transfers under force¹⁰. Research now shows that even highly security-conscious holders are not immune—meaning the threat transcends technical skill or digital hygiene¹¹.
This widening gap between digital protections and physical coercion is precisely where an alternative approach is needed.
The Photolok Advantage
Unlike traditional MFA methods that collapse under physical threats, Photolok introduces adaptive, attack-responsive visual authentication designed to transform wrench attacks from moments of complete vulnerability into opportunities for silent resistance¹².
Duress Signaling in Action
Consider a scenario: a user pre-selects a specific photo as a “duress” photo. If forced to authenticate, selecting this photo triggers a silent alarm to security contacts and law enforcement, while granting access to the attacker. This ensures that, even during a threat, victims can discreetly signal for help without escalating the situation¹³.
One-Time Use
Each photo is cryptographically unique. Selecting a one-time use photo limits disclosure because the photo expires after a single use. Even if attackers observe or capture it during that session, the photo cannot be reused, significantly limiting their ability to log in again in the future¹³.
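A complementary sketch of how a backend might enforce that expiry atomically is shown below (a hypothetical in-memory store, not Netlok’s implementation): even two simultaneous attempts with a captured one-time photo cannot both succeed.

```python
import threading

class OneTimePhotoStore:
    """Toy in-memory registry of still-valid one-time photo IDs per account."""

    def __init__(self):
        self._lock = threading.Lock()
        self._valid = {}  # account_id -> set of unused one-time photo IDs

    def issue(self, account_id, photo_id):
        with self._lock:
            self._valid.setdefault(account_id, set()).add(photo_id)

    def redeem(self, account_id, photo_id):
        """Atomically consume the photo; only the first caller can ever win."""
        with self._lock:
            photos = self._valid.get(account_id, set())
            if photo_id in photos:
                photos.remove(photo_id)
                return True
            return False

store = OneTimePhotoStore()
store.issue("acct-1", "photo-temp-7")
print(store.redeem("acct-1", "photo-temp-7"))  # True  - the legitimate login
print(store.redeem("acct-1", "photo-temp-7"))  # False - any replay or race loses
```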
Cognitive Confusion
Photolok’s visual, point-and-click system is unfamiliar to most criminals who expect passwords or PINs. Attackers may struggle to articulate demands (“click on your photos” is less intuitive than “enter your password”), creating crucial delays and confusion¹².
Risk Reduction Tools
There are a number of actions that can be taken to reduce the risk of attack and minimize harmful outcomes.
From Vulnerability to Empowerment
2025’s weekly attack frequency marks a turning point in crypto security. For the first time, tools exist that change the outcomes of physical coercion, enabling individuals to silently signal for help and limit attackers’ ability to access their personal information under duress. With Photolok’s duress photo login, if someone forces a user to unlock crypto, selecting a special “duress photo” quietly alerts help without tipping off the attacker. Instead of feeling powerless, users get a way to protect their assets and ask for help, even in dangerous situations. The $5 wrench and threat of physical harm will always defeat pure encryption, but it doesn’t have to defeat human ingenuity.
Ready to enhance your security? Learn more about how Photolok can protect your assets at Netlok.com and explore our blog resources for deeper insights into duress-resistant authentication and the future of crypto security.
Sources
A.R. Perez, Netlok | July 8, 2025
Multi-factor authentication (MFA) was once hailed as a near-perfect shield, yet recent headline breaches prove attackers are not only slipping past it—they are doing so at an accelerating pace. This report ranks today’s most common MFA combinations from weakest to strongest and quantifies the sharp rise in MFA-related attacks between 2023 and 2025. It should be noted that Photolok® (a passwordless MFA factor that uses proprietary-coded photos) is not included in this analysis.
Why MFA Strength Varies
Every MFA scheme marries at least two factors—knowledge (password/PIN), possession (token/phone), or inherence (biometric). Security depends on:
Ranking MFA Combinations
| Rank | Typical Combination | Core Weaknesses | Core Strengths | Verdict |
| 8 (Strongest) | Hardware passkey + on-device biometric (FIDO2/WebAuthn) | None documented | No factor data ever leaves the device; resistant to phishing and replay 1, 2; cryptographic challenge tied to hardware; biometric unlock 3 4 | Phishing-resistant, passwordless gold standard |
| 7 | Password + hardware security key (FIDO2/U2F) | Requires user to manage key inventory | Cryptographic possession factor blocks replay 5, 1 | Best “password-plus” model |
| 6 | Password + smart-card/PKI token (PIV/CAC) | Complex deployment & driver issues | Mutual certificate validation; device binding 2 | Enterprise-grade where supported |
| 5 | Password + platform biometric (e.g., Windows Hello, Face ID) | Biometric unlock is local; underlying session can be phished if fallback to password allowed 4 | User-friendly; device-tied secrets6 | Good for mainstream use but still password-dependent |
| 4 | Password + number-matching push or TOTP-hardware token | Phishable one-time codes; token theft possible7, 8 | Short validity window, no SMS channel | Mid-level protection |
| 3 | Password + generic authenticator-app TOTP (30-second code) | Real-time phishing proxies capture code 9 | No carrier reliance; easy rollout 7 | Better than SMS, still phishable |
| 2 | Password + push notification (“Approve/Deny”) | MFA-fatigue bombing & social-engineering approvals10, 11 | User convenience | Frequently bypassed by prompt bombing |
| 1 (Weakest) | Password + SMS/voice code | SIM-swap, SS7 intercept, no encryption 12, 13 | Universal availability | Should be phased out per CISA and NIST guidance 2, 14 |
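To see why the one-time codes in the middle of this ladder (ranks 3 and 4) remain phishable, it helps to recall how a standard RFC 6238 TOTP code is computed: the six digits depend only on the shared secret and the clock, not on which site receives them, so a code relayed by a real-time phishing proxy still validates within the same 30-second step. A minimal standard-library sketch (illustrative, not tied to any particular vendor’s implementation):

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, step=30, digits=6):
    """Standard RFC 6238 TOTP: HMAC-SHA1 over the current 30-second counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# The code is site-agnostic: a victim who types it into a phishing page hands the
# attacker a value that the legitimate server will accept for the rest of the window.
now = time.time()
print(totp("JBSWY3DPEHPK3PXP", at=now))
print(totp("JBSWY3DPEHPK3PXP", at=now))  # identical within the same 30-second step
```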
Key Takeaways
The Surge in MFA-Focused Attacks (2023-2025)
| Year | Representative Study | Metric Reported | Indicator of MFA Attack Activity |
| 2023 | Okta “State of Secure Identity 2023” | 12.7% of all MFA attempts on Okta’s Customer Identity Cloud were outright bypass attacks 15 | Baseline showing bypass in production traffic |
| 2023 | Kroll “Rise in MFA Bypass” (Oct 2023) | 90% of BEC cases investigated had MFA in place when accounts were compromised 16 | Confirms attackers pivoting to MFA-enabled targets |
| 2024 | Cisco Talos IR Q1 2024 | ≈50% of incident-response cases involved failure or bypass of MFA controls 10, 17 | Doubling of bypass prevalence over 2023 baseline |
| 2024 | Proofpoint “State of the Phish 2024” | Phishing frameworks such as EvilProxy observed in ≈1 million threats per month, explicitly harvesting MFA cookies 18 | Commodity kits fueling large-scale bypass |
| 2025 | Netrix Global “New Wave of MFA Bypass Attacks” (Jun 2025) | Describes a “surge” without giving a percentage; corroborated by FRSecure IR 2024-25, where 79% of BEC victims had correctly implemented MFA yet were breached 19 | MFA bypass now dominant in BEC incidents |
| 2025 | eSentire Q1 2025 Report | BEC attacks (often MFA bypass via Tycoon 2FA) rose 60% YoY, now 41% of all attacks 20 | Attack volume and proportion at all-time high |
Visualizing the Climb
| Year | Reported MFA-Attack Rate* | Year-over-Year Change |
| 2023 | 12.7%–90% depending on vertical (baseline) | — |
| 2024 | ≈50% of IR cases involve MFA bypass 10, 17 | +~35 pp from Okta baseline |
| 2025 | 79% of BEC victims breached despite MFA 19 | +29 pp vs 2024 IR data |
*Rates come from different datasets (CIAM traffic, IR engagements, BEC breaches). While scopes vary, all show the same climbing trajectory.
Why the Rate Keeps Rising
Commodity Phishing-as-a-Service (PhaaS)
Token Theft & Session Hijacking
MFA Fatigue & Social Engineering
Weak Factor Mix
Hardening the Human-Machine Perimeter
1. Phase Out Legacy Factors
2. Enforce Phishing-Resistant MFA
3. Strengthen Push Workflows
4. Layer Conditional Access & Risk-Based Controls
5. Educate to Eradicate MFA Fatigue
Conclusion
Attackers’ ability to sidestep MFA has grown from isolated exploits in 2023 to industrial-scale commodity services in 2025. Organizations that cling to password-plus-SMS or push-only MFA now occupy the bottom rung of the strength ladder and face a sharply rising threat curve. Yet the solution is within reach: broad adoption of phishing-resistant, device-bound authentication—coupled with risk-aware access controls—flips the cost curve back onto the attacker. Upgrade the factors, shrink the attack surface, and keep users from approving the next rogue prompt. One novel way to upgrade factors is Photolok – a passwordless factor that uses steganographically coded photos, protects against AI/ML attacks, and limits lateral-movement penetration due to its unique architecture.