
When AI Becomes the Con Artist 

Kasey Cromer, Netlok | February 12, 2026

Executive Summary 

Social engineering has always exploited human psychology. In 2026, attackers have a new partner: artificial intelligence. AI-generated phishing campaigns now achieve success rates roughly four to five times higher than traditional attacks. Deepfake voice cloning requires as little as three seconds of audio. And purpose-built criminal tools can generate thousands of hyper-personalized attack messages in seconds. 

Here’s what security leaders need to understand: 

1) Social engineering is now cited as a leading cyber threat for 2026, with sharp year-over-year increases anticipated in attempts, AI-generated campaigns, and business email compromise (BEC) losses; 94% of businesses experienced at least one social engineering incident in 2025 

2) Attackers have moved beyond email to orchestrate multi-channel campaigns combining phishing, vishing, SMS, and deepfake video across platforms like Slack, Teams, and WhatsApp 

3) Traditional MFA is failing. The identity layer has become the primary battleground, and Photolok’s patented photo-based authentication addresses this gap directly with verification that AI cannot predict, clone, or replay 

Key Social Engineering Metrics  

| Metric | Finding | Source |
| --- | --- | --- |
| Average cost of a phishing-driven breach | $4.88M per incident | IBM/Huntress 2025 |
| SMS phishing (smishing) vs. email phishing effectiveness | 19-36% click rate vs. 2-4% for email (up to 9x more effective) | Spacelift 2026 |
| Cloud breaches starting with compromised credentials | 46% of cloud breaches begin with stolen credentials, often obtained via social engineering | CompareCheapSSL 2025 |
| Training effectiveness (sustained vs. one-time) | One-time training reduces susceptibility by only 8%; continuous training improves effectiveness to 23% | CompareCheapSSL 2025 |
| Third-party involvement in breaches | 30% of breaches now involve third parties | Verizon DBIR 2025 |
| Small business survival rate post-breach | 60% shut down within 6 months after a major breach | Huntress 2025 |

The Five Social Engineering Trends Reshaping 2026 

1. AI-Powered Hyper-Personalization at Scale 

The old advice about spotting phishing (“look for spelling mistakes,” “check the tone”) is obsolete. Modern large language models produce grammatically flawless, contextually accurate messages that mirror your organization’s communication style. 

According to SecurityWeek’s Cyber Insights 2026 report, attackers now use AI to scrape social media activity, job roles, company updates, and even earnings calls to generate messages that feel authentic. The result? Phishing emails that reference your recent product launch, congratulate you on a promotion, or follow up on a project you discussed publicly. 

HYPR CEO Bojan Simic described the shift directly: “What once targeted human error now leverages AI to automate deception at scale. Deepfakes, synthetic backstories, and real-time voice or video manipulation are no longer theoretical; they are active, sophisticated threats designed to bypass traditional defenses and exploit trust gaps.” 

What makes this particularly dangerous is scale. Attackers can now launch hyper-personalized campaigns at mass phishing volume. The economics have shifted decisively in attackers’ favor. 

2. Deepfakes Move from Headlines to Standard Playbook 

Deepfakes are no longer fringe tools. They’re now a scalable part of social engineering campaigns, woven across entire attack chains rather than used as isolated tricks. 

The numbers tell the story: Gartner predicts that by the end of 2026, 30% of enterprises will no longer consider standalone identity verification and authentication solutions reliable in isolation. This shift reflects a stark reality: deepfake attacks bypassing biometric authentication increased 704% in 2023, and the technology has only improved since. 

Real attacks are already causing real damage. In one widely cited case, a finance employee authorized a transfer of roughly $25 million after joining what they believed was a legitimate video call with their CFO. Both the likeness and voice were deepfaked. In early 2026, X-PHY CEO Camellia Chan stated that “deepfakes will become the default social engineering tool by year-end 2026.” 

The barrier to entry has collapsed. Attackers now use voice cloning in phone calls with real-time synthesis that replicates an executive’s tone, cadence, and vocal signature. Short-form deepfake videos (15-30 seconds) are being embedded in WhatsApp messages and Slack channels, appearing as urgent updates from leadership. 

3. Vishing and Help Desk Attacks Surge 

Voice phishing (vishing) has transformed with AI voice-cloning tools. In multiple industries, vishing has replaced traditional phishing as the top social engineering threat. 

In January 2026, Okta’s threat researchers warned about custom phishing kits purpose-built for vishing being sold on dark web forums. These kits let attackers control authentication flows in real time while on the phone with victims. The attacker creates a custom phishing page, spoofs a phone number to impersonate the IT help desk, and convinces targets to visit the page under pretexts like “setting up a passkey” or “verifying account security.” 

The ShinyHunters cyber extortion syndicate has already claimed access to major companies through exactly this technique: vishing Okta SSO credentials. Help desk staff become the weak link when they relax verification procedures to accommodate callers who sound panicked. 

These attacks succeed because caller ID is easily spoofed yet still treated as partial proof of identity, and because high-impact actions like resetting MFA or granting access to sensitive tools proceed without verification through a separate, trusted channel. 
 
Defenses include requiring callback verification to a known number (not one provided by the caller), implementing code-based verification where the help desk provides a code the caller must retrieve from their authenticated account, and training staff that urgency is itself a red flag. 
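
The code-based verification defense can be sketched in a few lines. This is a hypothetical illustration, not a specific vendor workflow: the function names, the six-digit format, and the five-minute expiry are all assumptions. The key property is that the code travels through the caller's already-authenticated account, a channel a vishing attacker on the phone does not control.

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # assumption: codes expire after 5 minutes


def issue_verification_code() -> dict:
    """Generate a one-time six-digit code and record when it was issued."""
    return {"code": f"{secrets.randbelow(10**6):06d}", "issued_at": time.time()}


def deliver_to_authenticated_account(account_inbox: list, ticket: dict) -> None:
    """Stand-in for pushing the code into the user's logged-in session/portal."""
    account_inbox.append(ticket["code"])


def verify_callback(ticket: dict, spoken_code: str) -> bool:
    """Accept only a timely, exact match, using a constant-time compare."""
    if time.time() - ticket["issued_at"] > CODE_TTL_SECONDS:
        return False
    return hmac.compare_digest(ticket["code"], spoken_code)


# Example flow: the legitimate caller reads back the code from their
# authenticated portal; an attacker who only has the phone line cannot.
inbox: list = []
ticket = issue_verification_code()
deliver_to_authenticated_account(inbox, ticket)

assert verify_callback(ticket, inbox[-1]) is True  # real user passes
wrong = "000000" if ticket["code"] != "000000" else "111111"
assert verify_callback(ticket, wrong) is False     # guesser fails
```

The direction of the code matters: the help desk issues it and the caller retrieves it, never the reverse, so there is nothing for the caller to social-engineer the agent into reading aloud.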

4. ClickFix: The Attack That Makes You Infect Yourself 

A concerning trend has emerged rapidly through 2025 and into 2026: ClickFix attacks. These campaigns use fake CAPTCHA prompts or browser error messages to trick users into running malicious commands on their own computers. 

The attack is deceptively simple. You land on a webpage showing what looks like a CAPTCHA (“Verify you are human”) or a browser error (“Update required”). The prompt tells you to press Windows+R to open the Run dialog. You’re then instructed to paste (Ctrl+V) and press Enter. What you don’t realize is that malicious code was silently copied to your clipboard when you clicked the fake prompt. 

Microsoft, SentinelOne, and Proofpoint have all documented active ClickFix campaigns. The technique has been adopted by nation-state actors including Kimsuky (North Korea), MuddyWater (Iran), and APT28 (Russia). Criminal groups use it to deliver infostealers like Lumma Stealer and remote access trojans. 

ClickFix works because it exploits user fatigue with anti-spam mechanisms and bypasses conventional security tools. The user executes the malware themselves, so there’s no exploit to detect. 
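
As a rough illustration of how defenders can compensate, a heuristic can flag clipboard or Run-dialog contents that match command patterns commonly reported in ClickFix lures, such as encoded PowerShell, mshta loaders, or downloads piped straight into a shell. The patterns below are illustrative examples under those assumptions, not a complete or vendor-endorsed signature set.

```python
import re

# Illustrative ClickFix-style patterns (examples only, not exhaustive):
SUSPICIOUS_PATTERNS = [
    # PowerShell invoked with an encoded command blob
    re.compile(r"powershell(\.exe)?\s+.*-(enc|encodedcommand)\b", re.I),
    # mshta pulling an HTA payload from a remote URL
    re.compile(r"\bmshta(\.exe)?\s+https?://", re.I),
    # a download piped directly into an interpreter
    re.compile(r"\b(curl|iwr|invoke-webrequest)\b.*\|\s*(iex|sh|bash)\b", re.I),
    # cmd /c fetching something remote
    re.compile(r"\bcmd(\.exe)?\s+/c\s+.*https?://", re.I),
]


def looks_like_clickfix_payload(text: str) -> bool:
    """Return True if the string matches any known-suspicious pattern."""
    return any(p.search(text) for p in SUSPICIOUS_PATTERNS)


assert looks_like_clickfix_payload("powershell -EncodedCommand aQBlAHgA") is True
assert looks_like_clickfix_payload("mshta.exe https://example.invalid/fix.hta") is True
assert looks_like_clickfix_payload("git status") is False
```

A check like this is a backstop, not a fix: the durable mitigations are disabling the Run dialog where it isn't needed and teaching users that no legitimate CAPTCHA ever asks them to paste a command.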

5. Agentic AI and Multi-Channel Coordinated Attacks 

Social engineering no longer arrives through a single channel, and it’s no longer manually orchestrated. Agentic AI is turning social engineering into an end-to-end automated operation, from reconnaissance to outreach to post-compromise lateral movement. 

Forrester predicts that chains of specialized AI agents are emerging: some focus on reconnaissance, others craft lures, others manage infrastructure, together enabling mostly autonomous social engineering operations. Attackers now orchestrate campaigns across email, phone, SMS, and collaboration platforms simultaneously. 

A common flow: an email warning about suspicious activity, followed by a vishing call to “confirm your details.” Or a convincing voice message backed up by a phishing link via SMS. If the target ignores one channel, the attacker pivots to another. 

Cloud Range’s 2026 analysis found that attackers combine real user data from breaches, AI-generated personas, and automated messaging systems to deceive employees and consumers at scale. Detection and response must focus on interaction patterns, not single events. 
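
That interaction-pattern focus can be sketched as a simple correlation rule: flag any target contacted through two or more distinct channels within a short window. The event schema and the one-hour window below are assumptions for illustration, not a real SIEM rule.

```python
from collections import defaultdict

WINDOW_SECONDS = 3600  # assumption: one-hour correlation window


def flag_multichannel_targets(events: list) -> set:
    """events: [{'target': ..., 'channel': ..., 'ts': epoch_seconds}, ...]
    Returns targets hit via >= 2 distinct channels inside the window."""
    by_target = defaultdict(list)
    for e in events:
        by_target[e["target"]].append(e)
    flagged = set()
    for target, evs in by_target.items():
        evs.sort(key=lambda e: e["ts"])
        for i, first in enumerate(evs):
            channels = {first["channel"]}
            for later in evs[i + 1:]:
                if later["ts"] - first["ts"] > WINDOW_SECONDS:
                    break
                channels.add(later["channel"])
            if len(channels) >= 2:
                flagged.add(target)
                break
    return flagged


events = [
    {"target": "alice", "channel": "email", "ts": 0},
    {"target": "alice", "channel": "sms",   "ts": 900},  # 15 min later
    {"target": "bob",   "channel": "email", "ts": 0},
    {"target": "bob",   "channel": "email", "ts": 120},  # same channel only
]
assert flag_multichannel_targets(events) == {"alice"}
```

Even this toy rule catches the email-then-vishing sequence described above, which single-channel monitoring would score as two unrelated low-severity alerts.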

Why Traditional MFA Is Failing 

Here’s the uncomfortable truth: traditional multi-factor authentication is increasingly being defeated. 

The custom vishing kits documented by Okta in January 2026 can intercept SMS and voice one-time passwords, push-based MFA approvals, and app-based time-based one-time passwords. Because attackers control the pages shown to targets and synchronize them with spoken instructions, they can defeat any MFA method that is not phishing-resistant. 

The research firm Xcitium reported a 45% year-over-year rise in 2FA phishing attacks in 2025, with global damages recorded at $1.2 billion, noting that over 70% of targeted corporate attacks now involve some form of 2FA bypass. 

Phishing-resistant MFA options like FIDO2/WebAuthn security keys, passkeys, and certificate-based authentication offer stronger protection. But most organizations haven’t deployed them broadly, leaving employees vulnerable to attacks that bypass what they believe is strong authentication. 
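
The reason these options resist real-time phishing proxies is origin binding: the authenticator signs the server's challenge together with the origin it is actually talking to, so a response produced on a lookalike domain fails verification at the real server. The sketch below illustrates the idea with a symmetric HMAC toy; real FIDO2/WebAuthn uses per-site public-key credentials, and every name here is illustrative.

```python
import hashlib
import hmac

REAL_ORIGIN = "https://login.example.com"  # assumption: the genuine site


def authenticator_sign(device_key: bytes, challenge: bytes, origin: str) -> bytes:
    """Toy authenticator: binds the response to the origin it sees."""
    return hmac.new(device_key, challenge + origin.encode(), hashlib.sha256).digest()


def server_verify(device_key: bytes, challenge: bytes, response: bytes) -> bool:
    """The server only accepts responses bound to its own origin."""
    expected = authenticator_sign(device_key, challenge, REAL_ORIGIN)
    return hmac.compare_digest(expected, response)


key = b"per-device-secret"
challenge = b"nonce-123"

# A login from the real origin verifies...
genuine = authenticator_sign(key, challenge, REAL_ORIGIN)
assert server_verify(key, challenge, genuine) is True

# ...but the same challenge relayed through a lookalike proxy does not,
# because the wrong origin is baked into the signature.
phished = authenticator_sign(key, challenge, "https://login.examp1e.com")
assert server_verify(key, challenge, phished) is False
```

This is exactly the property OTPs lack: a six-digit code carries no information about where the user typed it, so a proxy can relay it unchanged.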

Why Photolok Addresses the 2026 Threat Landscape 

Every attack we’ve described exploits the same fundamental weakness: authentication systems that can be observed, intercepted, or replicated. Passwords can be phished. SMS codes can be intercepted. Push notifications can be socially engineered. Even biometrics face growing threats from deepfakes. 

This is why we built Photolok at Netlok. Photolok’s patented photo-based authentication uses steganographically coded images that are randomized for every session. Users authenticate through cognitive recognition, selecting photos only they would recognize from a randomized set. Because each session is freshly randomized, an observed selection cannot be reused by an attacker, intercepted in transit like an SMS code, or deepfaked like biometric data. 

Against AI-powered social engineering: Because Photolok’s login process uses dynamic photo randomization and embedded steganographic codes, AI and machine learning tools have no pattern to learn and no credential to harvest. 

Against vishing and help desk attacks: When an employee is pressured to ‘verify their identity’ over the phone, Photolok’s visual selection process resists transfer to an attacker. Even if the user describes their photo, the attacker must identify it from a randomized set of images, making accurate selection more difficult. There’s nothing to read aloud, nothing to type into a fake portal. 

Against coercion scenarios: The Duress Photo feature addresses what happens when social engineering succeeds at the human level. If someone is being coerced into authenticating, whether through manipulation, insider pressure, or threats, they can select a designated photo that grants access but silently alerts security. In an era where AI makes social engineering more convincing than ever, this silent alarm provides a critical safety net. 

What Security Leaders Should Do Now 

1. Assume your employees will be targeted with AI-enhanced attacks. Training that focuses on spelling errors and generic greetings is obsolete. Update awareness programs to address AI-generated content, deepfake audio and video, and multi-channel attack sequences. 

2. Deploy phishing-resistant authentication. Traditional MFA is no longer sufficient for high-risk roles and sensitive systems. Evaluate solutions like Photolok that resist the specific attack vectors dominating 2026: credential interception, real-time phishing proxies, and AI-powered impersonation. 

3. Harden help desk and support workflows. Require callback verification to a pre-verified contact number for high-impact actions like MFA resets, password changes, and access grants. Caller ID and callback numbers should never be treated as proof of identity. 

4. Implement detection for multi-channel attack patterns. Single-channel monitoring misses coordinated campaigns. Security operations should correlate suspicious activity across email, voice, SMS, and collaboration platforms. 

5. Establish verification protocols for financial transactions. Any request involving wire transfers, payment changes, or sensitive data should require confirmation through a channel the attacker cannot control. 

6. Brief leadership on the AI-enhanced threat landscape. Social engineering losses are a board-level issue. Ensure executives understand that the attacks of 2026 look nothing like the phishing emails they remember. 

The Bottom Line 

Social engineering has always been about exploiting human trust. In 2026, AI has made that exploitation faster, more convincing, and infinitely scalable. Attackers can clone voices from seconds of audio, generate thousands of personalized attack messages instantly, and orchestrate multi-channel campaigns that adapt in real time. 

These attacks show up as unplanned losses, regulatory scrutiny, and board-level questions about why known identity weaknesses were not addressed sooner. They are not just IT incidents; they are enterprise risk events. 

The organizations that will avoid becoming the next cautionary tale are those investing in authentication that cannot be socially engineered: systems where there’s nothing to intercept, nothing to replay, and nothing an AI can learn to predict. 

Photolok addresses this reality directly. When the attack exploits human psychology, the defense must go beyond human vigilance. 

The tools exist. The question is whether your organization will deploy them before or after the breach. 

Want to see how Photolok can help secure your organization against AI-powered social engineering? 

Request Your Personalized Demo 

About the Author 

Kasey Cromer is Director of Customer Experience at Netlok. 

Sources 

[1] ZeroFox Intelligence. “2026 Cyber Threat Predictions and Recommendations.” December 2025. https://www.zerofox.com/blog/2026-cyber-threat-predictions/ 

[2] SecurityWeek. “Cyber Insights 2026: Social Engineering.” January 2026. https://www.securityweek.com/cyber-insights-2026-social-engineering/ 

[3] Cloud Range. “5 Key Social Engineering Trends in 2026.” January 2026. https://www.cloudrangecyber.com/news/5-key-social-engineering-trends-in-2026 

[4] Hoxhunt. “Vishing Attacks Surge 442%.” December 2025. https://hoxhunt.com/blog/vishing-attacks 

[5] Help Net Security. “Okta Users Under Attack: Modern Phishing Kits Are Turbocharging Vishing Attacks.” January 2026. https://www.helpnetsecurity.com/2026/01/23/okta-vishing-adaptable-phishing-kits/ 

[6] BetaNews. “AI as a Target, Web-Based Attacks and Deepfakes: Cybersecurity Predictions for 2026.” January 2026. https://betanews.com/2025/12/22/ai-as-a-target-web-based-attacks-and-deepfakes-cybersecurity-predictions-for-2026/ 

[7] Keepnet Labs. “250+ Phishing Statistics and Trends You Must Know in 2026.” January 2026. https://keepnetlabs.com/blog/top-phishing-statistics-and-trends-you-must-know 

[8] Keepnet Labs. “Deepfake Statistics and Trends 2025.” November 2025. https://keepnetlabs.com/blog/deepfake-statistics-and-trends 

[9] DeepStrike. “Deepfake Statistics 2025: AI Fraud Data and Trends.” September 2025. https://deepstrike.io/blog/deepfake-statistics-2025 

[10] Hoxhunt. “Business Email Compromise Statistics 2026.” January 2026. https://hoxhunt.com/blog/business-email-compromise-statistics 

[11] FBI IC3. “Business Email Compromise: The $55 Billion Scam.” September 2024. https://www.ic3.gov/PSA/2024/PSA240911 

[12] Abnormal AI. “Threat Report: BEC and VEC Attacks Show No Signs of Slowing.” November 2025. https://abnormal.ai/blog/bec-vec-attacks 

[13] Microsoft Security Blog. “Think Before You Click(Fix): Analyzing the ClickFix Social Engineering Technique.” August 2025. https://www.microsoft.com/en-us/security/blog/2025/08/21/think-before-you-clickfix-analyzing-the-clickfix-social-engineering-technique/ 

[14] Proofpoint. “ClickFix Social Engineering Technique Floods Threat Landscape.” February 2025. https://www.proofpoint.com/us/blog/threat-insight/security-brief-clickfix-social-engineering-technique-floods-threat-landscape 

[15] SentinelOne. “Caught in the CAPTCHA: How ClickFix Is Weaponizing Verification Fatigue.” May 2025. https://www.sentinelone.com/blog/how-clickfix-is-weaponizing-verification-fatigue-to-deliver-rats-infostealers/ 

[16] Xcitium Threat Labs. “Unmasking Sneaky 2FA: How Modern Phishing Kits Bypass MFA in 2026.” January 2026. https://threatlabsnews.xcitium.com/blog/unmasking-sneaky-2fa-how-modern-phishing-kits-bypass-mfa-in-2026/ 

[17] Jericho Security. “Voice Phishing Is Rising: Why ‘Just a Phone Call’ Is Now a Real Threat.” February 2026. https://www.jerichosecurity.com/blog/voice-phishing-vishing-prevention 

[18] Netlok. “How Photolok Works.” 2025. https://netlok.com/how-it-works/ 

[19] Spacelift. “Social Engineering Statistics.” 2025. https://spacelift.io/blog/social-engineering-statistics 

[20] Forrester. “Predictions 2026: Cybersecurity and Risk.” October 2025. https://www.forrester.com/blogs/predictions-2026-cybersecurity-and-risk/ 

[21] Huntress. “Impact of Social Engineering: Key Statistics on Businesses.” 2025. https://www.huntress.com/social-engineering-guide/impact-of-social-engineering-key-statistics-on-businesses 

[22] CompareCheapSSL. “100+ Social Engineering Statistics in 2025.” December 2025. https://comparecheapssl.com/100-social-engineering-statistics-in-2025-the-latest-stats-and-trends-revealed 

[23] Keepnet Labs. “Security Awareness Training Statistics.” January 2026. https://keepnetlabs.com/blog/security-awareness-training-statistics 
