Agentic AI in the Enterprise: The Security Guide Nobody Gave You
Kasey Cromer, Netlok | January 27, 2026
Executive Summary
Autonomous AI agents are now executing code, authorizing payments, and modifying systems across enterprise environments. According to PwC’s 2025 AI Agent Survey, 79% of organizations are already adopting AI agents, with Gartner predicting that 40% of enterprise applications will feature task-specific AI agents by the end of 2026 (up from less than 5% in 2025). The transformation is happening at unprecedented speed, and most organizations are deploying these systems without the governance frameworks needed to prevent systemic failures, regulatory violations, board-level accountability crises, and missed ROI targets. MIT research found that 95% of enterprise generative AI projects fail to deliver measurable financial returns, often because of inadequate governance and poor data foundations.
Here’s what security leaders need to know:
1) OWASP released its first Top 10 for Agentic Applications in December 2025, identifying critical risks from goal hijacking to rogue agents that security teams must address immediately.
2) Forrester predicts that agentic AI will cause a major public breach in 2026, with consequences severe enough to result in employee dismissals.
3) The identity layer is the critical control point, and Photolok’s patented photo-based authentication addresses this gap directly, replacing vulnerable credentials with dynamic, steganography-powered verification that AI cannot predict, harvest, or replay.
The Numbers That Define 2026
| Metric | Finding | Source |
| --- | --- | --- |
| Companies with AI agents in production | 57% of enterprises surveyed | G2, August 2025 |
| Practitioners citing security as top AI agent challenge | 62% of AI practitioners surveyed | Warmly Research, 2025 |
| Vulnerable agent framework components identified | 43 distinct components compromised via supply chain | Stellar Cyber, 2025 |
What Makes Agentic AI Different (and Dangerous)
If you’ve been following AI developments, you might think this is just another incremental step. It’s not. The shift from generative AI to agentic AI represents a categorical change in risk.
Here’s the difference: A standard large language model generates content like text or code. An agentic AI takes that several steps further. It uses tools, makes decisions, and performs multi-step tasks autonomously in digital or physical environments. It doesn’t just talk; it does.
Think about what that means in practice. As an example, an AI agent handling procurement can autonomously negotiate with suppliers, issue purchase orders, and authorize payments. A customer service agent can access customer records, modify accounts, and execute transactions. A security operations agent can respond to alerts, quarantine systems, and modify access controls.
When software can make decisions and act on its own, security strategies must shift from static policy enforcement to real-time behavioral governance. The OWASP GenAI Security Project put it directly: “Once AI began taking actions, the nature of security changed forever.”
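To make "real-time behavioral governance" concrete, here is a minimal, hypothetical sketch (not tied to any specific agent framework) of the kind of runtime check a platform could apply to every agent action: rather than a static allow/deny policy, it watches each agent's recent action rate and flags behavior that drifts outside its normal envelope. The class name, window size, and threshold are illustrative assumptions, not a prescribed design.

```python
import time
from collections import deque

class AgentBehaviorMonitor:
    """Illustrative runtime monitor: flag an agent whose action rate spikes
    far beyond its recent baseline, regardless of what the actions are."""

    def __init__(self, window_seconds=60.0, max_actions_per_window=30):
        self.window = window_seconds
        self.limit = max_actions_per_window
        self.timestamps = deque()

    def observe_action(self, now=None):
        """Record one action; return False if the agent exceeds its envelope."""
        now = time.time() if now is None else now
        self.timestamps.append(now)
        # Drop actions that fall outside the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        return len(self.timestamps) <= self.limit

monitor = AgentBehaviorMonitor()
for i in range(40):
    within_envelope = monitor.observe_action(now=float(i))  # 40 actions in 40 seconds
if not within_envelope:
    print("Agent exceeded its behavioral envelope; pause it and alert a human.")
```

A real deployment would track far richer signals (tools invoked, data volumes, time of day), but the governance point is the same: the check runs continuously, outside the model, on what the agent actually does.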
The OWASP Top 10 for Agentic Applications: Your New Security Framework
In December 2025, OWASP released the first comprehensive security framework specifically designed for autonomous AI systems. This framework names the ten most critical security risks for autonomous/agentic AI systems and gives high-level guidance to mitigate them. It is meant as the “field manual” for securing AI agents that can plan, act, use tools, and make decisions across workflows, similar in spirit to the classic OWASP Top 10 for web apps but focused on agentic AI. Developed with input from over 100 security researchers and providers including AWS and Microsoft, the Top 10 for Agentic Applications was built from real incidents: confirmed cases of data exfiltration, remote code execution, memory poisoning, and supply chain compromise.
The 10 risks (at a glance)
The Shadow Agent Problem Nobody Wants to Talk About
Remember shadow IT? We’re seeing the exact same pattern with AI agents now, except the stakes are exponentially higher.
According to Omdia research, while many enterprises deploy AI agents within controlled environments like Salesforce Agentforce, the real value comes from touching core applications and processes. That’s also where significant cyber risk lives. Employees are connecting AI tools to company systems without IT oversight, development teams use AI coding assistants with broad repository access, and business units deploy automation agents with excessive privileges. Each of these creates unvetted identity providers and data paths that exist entirely outside normal IAM controls.
The Barracuda Security report from November 2025 identified 43 agent framework components with embedded vulnerabilities via supply chain compromise. Researchers have already discovered malicious MCP (Model Context Protocol) servers in the wild. MCP servers are essentially plug-in tools that extend what agents can do, and a compromised one gives attackers direct access to agent capabilities. One malicious package impersonated a legitimate email service but secretly forwarded every message to an attacker. Another contained dual reverse shells. Any AI agent using these tools was unknowingly exfiltrating data or providing remote access.
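One practical mitigation is to treat agent tool servers like any other third-party dependency: pin and verify them before the agent is allowed to load them. The sketch below is a generic, hypothetical example, not an MCP-specific API; the package name and digest are placeholders. It simply refuses to load any tool package whose SHA-256 digest is not on a security-team-maintained allowlist.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist maintained and distributed by the security team.
# Package names and digests here are placeholders, not real artifacts.
APPROVED_TOOL_PACKAGES = {
    "email-connector-1.4.2.tar.gz": "0000000000000000000000000000000000000000000000000000000000000000",
}

def verify_tool_package(path):
    """Load a tool package only if its SHA-256 digest matches the pinned allowlist entry."""
    package = Path(path)
    expected = APPROVED_TOOL_PACKAGES.get(package.name)
    if expected is None or not package.is_file():
        return False  # unknown or missing package: treat it as a shadow agent component
    digest = hashlib.sha256(package.read_bytes()).hexdigest()
    return digest == expected

if __name__ == "__main__":
    candidate = "downloads/email-connector-1.4.2.tar.gz"
    if not verify_tool_package(candidate):
        raise SystemExit(f"Refusing to load unverified agent tool package: {candidate}")
```

The same discipline applies whether the tool is an MCP server, a plug-in, or an internal automation script: if it hasn't been vetted and pinned, the agent doesn't get to use it.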
Why Traditional Authentication Fails the Agentic AI Challenge
Every security incident I’ve described, whether goal hijacking, privilege escalation, or rogue agents, eventually comes down to identity and access. Traditional authentication wasn’t designed for a world where autonomous systems need verified identities, where the line between human and machine actors blurs, and where attackers use AI to generate convincing impersonations at machine speed.
For internal use (protecting team members): How do you ensure the person authorizing an agent’s action is actually who they claim to be? How do you detect coercion? Traditional passwords are vulnerable to phishing, and AI now generates sophisticated social engineering attacks.
For customer-facing applications: When AI agents handle customer interactions, how do you verify identity without friction? Biometrics face growing threats from deepfakes that convincingly impersonate real people.
For agent-to-system authentication: As Salesforce’s Model Containment Policy emphasizes, AI models must be granted only the minimum necessary capabilities. Enforcing this requires robust authentication at every access point, something static credentials cannot provide.
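A common pattern for enforcing "minimum necessary capabilities" at the access point is to replace long-lived static credentials with short-lived, narrowly scoped tokens issued per task and verified by the resource itself. The sketch below is a simplified, hypothetical illustration using only the Python standard library; in production you would use a managed identity provider and standard token formats such as OAuth or JWT, and the key, agent names, and action names here are placeholders.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"replace-with-a-secret-from-your-key-management-system"  # placeholder

def mint_agent_token(agent_id, allowed_actions, ttl_seconds=300):
    """Issue a short-lived token scoped to exactly the actions one task needs."""
    claims = {"agent": agent_id, "actions": allowed_actions, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    signature = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return f"{body.decode()}.{signature}"

def authorize(token, requested_action):
    """Enforce scope and expiry at the resource, not inside the model."""
    body, signature = token.rsplit(".", 1)
    expected = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # token was forged or tampered with
    claims = json.loads(base64.urlsafe_b64decode(body))
    return requested_action in claims["actions"] and time.time() < claims["exp"]

token = mint_agent_token("procurement-agent-01", ["read_supplier_catalog"])
print(authorize(token, "read_supplier_catalog"))   # True: inside the granted scope
print(authorize(token, "issue_purchase_order"))    # False: never granted, so the call is denied
```

The design choice that matters is that the check happens at the system being accessed, with a credential that expires in minutes and names specific actions, so a hijacked or over-ambitious agent cannot quietly expand its reach.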
What is the solution?
This is why we built Photolok at Netlok. Passwords and static one-time codes were designed for human logins in a pre-cloud, pre-agentic AI world, not for autonomous systems making thousands of decisions at machine speed. Photolok’s patented photo-based authentication uses steganography-coded images that randomize every session, creating a verification method built for a world where attackers use AI to harvest, guess, and replay credentials.
For internal teams: Photolok’s intuitive selection process, where users recognize and select their login photos, cannot be replicated by AI or automated systems. When an employee authorizes a sensitive agent action, you have high confidence it’s actually them.
The Duress Photo feature addresses scenarios most security tools ignore. If someone is being coerced into approving an agent’s action, whether through social engineering, physical threat, or insider pressure, they can select a designated photo that grants access but silently alerts security. This applies not just to physical coercion but also to high-stakes financial approvals, privileged access changes, and any agent-executed transaction where verification matters. In an era where AI agents execute transactions in milliseconds, this silent alarm could prevent catastrophic damage.
Against AI-powered attacks: Because Photolok’s login process uses dynamic photo randomization and embedded steganographic codes, it presents a minimal attack surface to AI/ML-driven credential attacks.
Learning from Enterprise AI Governance: The Salesforce Model
Salesforce’s approach to agentic AI security, documented in their Model Containment Policy and AI Acceptable Use Policy, provides a blueprint every organization should study. Their core principle: “The model reasons; the platform decides.” LLMs provide language intelligence, but configuration provides authority, safety, and accountability.
Key governance requirements:
No Autonomous Authority: AI models may recommend, summarize, classify, or assist, but must not make final decisions with legal, financial, safety, or rights-impacting consequences. Final authority must reside with a human or deterministic system.
Deterministic Control Over Probabilistic Behavior: Critical behaviors must be enforced outside the model. Routing, permissions, approvals, and enforcement must not rely on model judgment. Prompts may guide behavior but must never be the sole enforcement mechanism (a minimal sketch of this principle follows this list).
Human-in-the-Loop Requirements: Human review is mandatory when AI output affects individuals’ rights, is used in regulated domains, is externally published, or supports high-risk decisions.
No Self-Expansion: AI models must not modify their own instructions, permissions, or scope; generate or deploy new tools; escalate privileges; or chain actions indefinitely without external limits.
These aren’t optional guidelines. They’re foundational requirements for safe AI agent deployment.
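To make the deterministic-control, human-in-the-loop, and no-self-expansion requirements concrete, here is a minimal, hypothetical sketch of a policy gate that sits between an agent and the systems it touches. This is not Salesforce’s implementation, only an illustration of the principle that the model proposes and the platform decides; the action names, risk list, and chain-depth limit are assumptions for the example.

```python
HIGH_RISK_ACTIONS = {"authorize_payment", "modify_access_controls", "delete_data"}  # illustrative
MAX_CHAIN_DEPTH = 5  # hard external limit on indefinite action chaining

def gate_proposed_action(action, params, chain_depth, allowed_actions, human_approved):
    """The agent only *proposes*; this deterministic gate decides what actually runs."""
    if chain_depth > MAX_CHAIN_DEPTH:
        return "blocked: chain depth limit exceeded"
    if action not in allowed_actions:
        return "blocked: action is outside this agent's configured scope"
    if action in HIGH_RISK_ACTIONS and not human_approved:
        return "pending: routed to a human approver before execution"
    # Only at this point would the platform call the real system on the agent's behalf.
    return f"executed: {action} with {params}"

print(gate_proposed_action("summarize_invoice", {"invoice_id": 42}, 1,
                           {"summarize_invoice", "authorize_payment"}, human_approved=False))
print(gate_proposed_action("authorize_payment", {"amount_usd": 9800}, 2,
                           {"summarize_invoice", "authorize_payment"}, human_approved=False))
```

Note that nothing in the gate depends on what the model said or how it was prompted; the permissions, the risk list, and the chain limit live in configuration the model cannot rewrite.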
The Real Cost of Waiting
The organizations deploying AI agents today face a choice: implement proper governance now, or scramble to explain failures later.
Omdia analyst Todd Thiemann’s prediction for 2026 is blunt: “Some early AI agent deployments will get pushed into production with inadequate QA testing, insufficient security guardrails, or an over-permissioned agent, and we will start to see mischief involving AI agents. I expect 2026 will see AI agents touching core business processes, and some high-profile data breaches and fraud originating from those AI agents.”
Forrester’s prediction is even more direct: Agentic AI will cause a major public breach in 2026 that will lead to employee dismissals. When that breach happens, expect board investigations, regulatory scrutiny over data protection and financial controls, and serious questions about executive accountability. The fallout won’t be limited to IT departments.
The question isn’t whether your organization will face these risks. It’s whether you’ll be ready when they arrive.
What Security Leaders Should Do Now
1. Audit your AI agent exposure. Find out what agents your organization is actually using, what systems they can access, and what actions they can take. You can’t secure what you don’t know about.
2. Implement the principle of least agency. Every AI agent should have the minimum autonomy required. Review and restrict agent permissions aggressively. Require human approval before agents can execute financial transactions, modify access controls, delete data, or take any action that cannot be easily reversed.
3. Establish deterministic controls for critical decisions. Don’t rely on prompts (the natural language instructions that tell AI agents what to do and how to behave) for security enforcement. Build guardrails into your architecture that cannot be bypassed through prompt manipulation.
4. Rethink authentication for the agentic era. Evaluate modern alternatives like Photolok that resist AI-powered attacks and provide verification that autonomous systems cannot fake.
5. Build strong observability. Implement comprehensive logging of agent actions. Monitor for behavioral anomalies. Create kill switches for rapidly disabling misbehaving agents (a minimal sketch follows this list).
6. Brief your leadership. AI agent security is a governance issue. Ensure your board of directors understands the stakes before an incident forces that conversation.
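As a starting point for items 2 and 5, the sketch below shows one lightweight, hypothetical way to combine an append-only action log with a simple kill switch, using only the Python standard library. The file names and log format are assumptions; in production you would feed agent actions into your SIEM and tie the kill switch to your orchestration platform.

```python
import json
import time
from pathlib import Path

KILL_SWITCH = Path("agent_kill_switch")    # create this file to halt all agent actions
AUDIT_LOG = Path("agent_actions.jsonl")    # append-only log for anomaly review

def record_and_check(agent_id, action, params):
    """Log every proposed action, then refuse to proceed if the kill switch is engaged."""
    entry = {"ts": time.time(), "agent": agent_id, "action": action, "params": params}
    with AUDIT_LOG.open("a") as log:
        log.write(json.dumps(entry) + "\n")
    return not KILL_SWITCH.exists()

if record_and_check("cs-agent-07", "update_customer_record", {"customer_id": "A-1013"}):
    pass  # proceed through the normal permission and approval gates
else:
    print("Kill switch engaged: action logged but not executed; alert the security team.")
```

The point of logging before checking is that even blocked or halted actions leave an audit trail, which is exactly the evidence you will need when a board or regulator asks what the agent tried to do.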
The Bottom Line
The enterprise AI landscape in 2026 is moving faster than most security frameworks can adapt. AI agents are no longer experiments. They’re production systems with real permissions and real consequences. The organizations that act now to implement proper governance, authentication, and observability will be the ones that capture the value of agentic AI without becoming the next cautionary tale, and the ones that protect their financial results and preserve ROI.
The OWASP Agentic Top 10 gives us a framework. Enterprise policies like Salesforce’s provide governance blueprints. Modern authentication like Photolok addresses the identity challenges that traditional methods cannot solve.
The tools exist. The question is whether your organization will use them before or after the breach.
Photolok Offer
Want to see how Photolok can help secure your organization’s AI-powered future? Request Your Personalized Demo Today.
About the Author
Kasey Cromer is Director of Customer Experience at Netlok.
Sources
[1] PwC. “AI Agent Survey.” May 2025. https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-agent-survey.html
[2] Gartner. “Gartner Predicts 40% of Enterprise Apps Will Feature Task-Specific AI Agents by 2026.” August 2025. https://www.gartner.com/en/newsroom/press-releases/2025-08-26
[3] OWASP GenAI Security Project. “OWASP Top 10 for Agentic Applications for 2026.” December 2025. https://genai.owasp.org/resource/owasp-top-10-for-agentic-applications-for-2026/
[4] Forrester. “Predictions 2026: Cybersecurity and Risk.” October 2025. https://www.forrester.com/blogs/predictions-2026-cybersecurity-and-risk/
[5] Stellar Cyber. “Top Agentic AI Security Threats in 2026.” December 2025. https://stellarcyber.ai/learn/agentic-ai-securiry-threats/
[6] G2. “Enterprise AI Agents Report: Industry Outlook for 2026.” December 2025. https://learn.g2.com/enterprise-ai-agents-report
[7] CyberArk. “AI Agents and Identity Risks: How Security Will Shift in 2026.” December 2025. https://www.cyberark.com/resources/blog/ai-agents-and-identity-risks-how-security-will-shift-in-2026
[8] BleepingComputer. “The Real-World Attacks Behind OWASP Agentic AI Top 10.” January 2026. https://www.bleepingcomputer.com/news/security/the-real-world-attacks-behind-owasp-agentic-ai-top-10/
[9] Omdia/Dark Reading. “Identity Security 2026: Predictions and Recommendations.” January 2026. https://www.darkreading.com/identity-access-management-security/identity-security-2026-predictions-and-recommendations
[10] SecurityWeek. “Rethinking Security for Agentic AI.” January 2026. https://www.securityweek.com/rethinking-security-for-agentic-ai/
[11] Cloud Security Alliance. “Top 10 Predictions for Agentic AI in 2026.” January 2026. https://cloudsecurityalliance.org/blog/2026/01/16/my-top-10-predictions-for-agentic-ai-in-2026
[12] McKinsey. “The State of AI in 2025: Agents, Innovation, and Transformation.” November 2025. https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
[13] Salesforce. “Artificial Intelligence Acceptable Use Policy.” December 2025. https://www.salesforce.com/company/legal/agreements/
[14] Salesforce. “Model Containment Policy.” January 2026.
[15] Netlok. “How Photolok Works.” 2025. https://netlok.com/how-it-works/
[16] MIT/CIO. “2026: The Year AI ROI Gets Real.” January 2026. https://www.cio.com/article/4114010/2026-the-year-ai-roi-gets-real.html
[17] Astrix Security. “The OWASP Agentic Top 10 Just Dropped: Here’s What You Need to Know.” December 2025. https://astrix.security/learn/blog/the-owasp-agentic-top-10-just-dropped-heres-what-you-need-to-know/
[18] Giskard. “OWASP Top 10 for Agentic Applications 2026: Security Guide.” December 2025. https://www.giskard.ai/knowledge/owasp-top-10-for-agentic-application-2026
[19] Palo Alto Networks. “OWASP Top 10 for Agentic Applications 2026 Is Here.” December 2025. https://www.paloaltonetworks.com/blog/cloud-security/owasp-agentic-ai-security/
[20] ActiveFence. “OWASP Top 10 for Agentic AI.” December 2025. https://www.activefence.com/blog/owasp-top-10-agentic-ai/