
Why enterprises need governance frameworks for agentic AI | TechRadar

AI agents are making decisions for your business. That's why we need a new model of accountability for them.



Enterprise productivity tools are entering a new phase. Instead of simply automating predefined workflows, platforms like Microsoft’s emerging Copilot Cowork concept promise something far more ambitious: AI agents capable of executing complex, multi-step tasks across tools such as Microsoft 365.

These systems represent a shift from automation to delegation. Instead of defining every step of a process, employees describe an outcome and the agent determines how to achieve it — sending emails, updating documents, adjusting permissions, or coordinating across applications.


For enterprise security and governance teams, agentic AI raises a fundamental question: what happens when the system making operational decisions isn’t a human or even a traditional piece of software, but an autonomous agent acting on a human’s behalf?

Many agent-based systems attempt to mitigate risk with a “human in the loop” approach. When the AI reaches a decision point, it pauses and prompts the user to approve the next step.
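The "human in the loop" pattern described above can be sketched as a simple approval gate. This is a minimal illustration, not any vendor's API; the names `AgentAction` and `require_approval` are invented for the example.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AgentAction:
    description: str   # e.g. "send the weekly summary email"
    reversible: bool   # can this action be undone after the fact?

def require_approval(action: AgentAction, ask: Callable[[str], str] = input) -> bool:
    """Pause the agent at a decision point and ask the human principal
    to confirm the next step before it executes."""
    prompt = f"Agent wants to: {action.description}. Approve? [y/N] "
    return ask(prompt).strip().lower() == "y"
```

Note that the gate defaults to "no": anything other than an explicit "y" stops the action. The weakness discussed next is behavioral rather than technical, since nothing in a gate like this forces the reviewer to actually read the prompt.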

In theory, this introduces oversight. In practice, it may introduce very little.

The "check-in-with-my-human" model is often a UX compromise disguised as a safety feature. Employees who delegated a workflow to an AI agent did so because they were already overloaded. When the system interrupts them with approval prompts, the likely outcome isn't careful review but a quick rubber stamp.

We’ve seen this behavior before. Most users click through cookie consent banners without reading them. The same dynamic will apply to AI check-ins.

Meaningful oversight requires the reviewer to understand what the agent did, why it made a decision, and what the downstream consequences might be. That level of scrutiny directly conflicts with the reason the employee delegated the task in the first place.

For low-stakes activities, this approach may be sufficient. But the first time an agent executes an irreversible action that no one actually reviewed, organizations will discover just how fragile this safety model is.


Agentic AI also challenges one of the core assumptions of enterprise governance frameworks: that actions in a system are clearly attributable to a human user.

Tools like Copilot Cowork blur that line and create a major accountability gap. When an AI agent sends an email or modifies SharePoint permissions, it is no longer clear whether the employee, the AI, or the productivity platform is responsible for that change. Most governance frameworks weren't built for a world where software makes autonomous, on-the-fly judgment calls.

Audit trails today assume a direct link between a user identity and an action taken within the system. When an AI agent is acting autonomously on behalf of a user, that relationship becomes murky.

To manage this risk, organizations should treat enterprise AI agents less like software features and more like digital employees: give each agent its own identity, scoped permissions, and a complete audit trail.

Without these controls, compliance investigations will quickly become difficult—or impossible—to reconstruct.
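Treating an agent as a digital employee implies, at minimum, a deny-by-default permission scope attached to its identity. A sketch under assumed naming (the agent ID and scope strings are invented for illustration):

```python
# Deny-by-default permission scopes per agent identity. The agent ID
# and scope names are illustrative, not a real platform's API.
ALLOWED_SCOPES: dict[str, set[str]] = {
    "agent:cowork-7f3a": {"mail.draft", "docs.read", "calendar.read"},
}

def is_permitted(agent_id: str, scope: str) -> bool:
    """An agent may only act inside scopes explicitly granted to it;
    unknown agents and ungranted scopes are denied by default."""
    return scope in ALLOWED_SCOPES.get(agent_id, set())
```

The important property is the default: an agent that was never granted `permissions.write` cannot acquire it mid-task, no matter what path its planning takes.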

Part of the challenge comes from how fundamentally different agentic AI is from traditional automation.

Tools like Power Automate or Zapier operate using deterministic workflows. Engineers define each step of a process and the logic connecting them. When triggered, the automation executes those steps exactly the same way every time.

Agentic AI inverts that model. Instead of scripting every action, users describe the outcome they want, and the AI determines the path dynamically, making decisions along the way based on context.

That opens the door to automating work that previously couldn’t be automated — tasks that are messy, ambiguous, or dependent on situational judgment.

But it also introduces variability and unpredictability. Two executions of the same request may take different paths depending on context.
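The difference can be made concrete in a few lines. This is an illustrative sketch, not either product's actual execution model:

```python
def deterministic_workflow(record: dict) -> list[str]:
    """Fixed steps in a fixed order, every run: the Power Automate /
    Zapier model. The step list never varies."""
    return ["validate", "transform", "notify"]

def agentic_workflow(goal: str, context: dict, plan_next_step) -> list[str]:
    """The agent chooses each step at run time from the current context,
    so two runs of the same goal may take different paths."""
    taken: list[str] = []
    while not context.get("done") and len(taken) < 10:  # safety cap on steps
        step = plan_next_step(goal, context)  # model-chosen, variable
        taken.append(step)
        context = {**context, "done": step == "finish"}
    return taken
```

In the deterministic case the audit question is trivial: the steps are the steps. In the agentic case, the only record of what happened is whatever `plan_next_step` chose on that particular run, which is exactly why logging and observability carry so much more weight.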

Organizations shouldn’t rush to replace their existing automation pipelines with agentic systems. Traditional automation still excels at repeatable, deterministic tasks.

The better approach is to apply agentic AI to workflows that were never practical to automate in the first place.

Despite the risks, agentic productivity tools are genuinely exciting. Used thoughtfully, they can reduce friction across knowledge work and free employees from administrative overhead.

Today, the safest applications tend to be low-risk but time-consuming tasks, such as:

  • Aggregating information from multiple workstreams

These are tasks that often go half-done, or undone entirely, because employees simply run out of time.

However, organizations should resist the temptation to push agentic systems into high-consequence workflows too quickly.

Until the platforms can deliver real observability, enforceable governance, and reliable rollback, organizations need to draw a hard line. And until that happens, there are certain domains that should be off-limits to agentic AI:

• Anything touching compliance or audit obligations

• Financial approvals, transactions, or budget authority

• HR and personnel decisions — hiring, terminations, disciplinary actions

• Access controls, permissions, and data governance

If your AI agent can approve a wire transfer or modify access controls without a human being in the loop, you’ve essentially created an unaudited decision-maker with admin privileges.
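A hard line like this can be enforced as a policy gate in front of every agent action. The domain labels below mirror the off-limits list above; mapping concrete actions to domains is left as an assumption of the sketch:

```python
# Hard policy gate mirroring the off-limits domains listed above.
# Domain labels are illustrative; a real deployment would need a
# reliable mapping from agent actions to these categories.
OFF_LIMITS = {"compliance", "finance", "hr", "access_control"}

def gate(action_domain: str, human_approved: bool) -> str:
    """Agentic actions in high-consequence domains are blocked outright;
    everything else still requires a recorded human approval."""
    if action_domain in OFF_LIMITS:
        return "blocked"
    return "allowed" if human_approved else "pending_review"
```

The point of returning `"blocked"` unconditionally for off-limits domains is that no approval prompt, rubber-stamped or otherwise, can put an agent in charge of a wire transfer or a permission change.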

Agentic AI's potential is enormous. But right now, most organizations are focused on what these tools can do, not how they should be managed. And it’s not like we haven’t seen this movie before. Every major tech wave of the past three decades (web apps, BYOD, cloud, scripted bots/automation) has followed the same arc: rapid adoption, delayed governance, then painful correction.

But the difference with agentic AI is that those were all deterministic tools: they did what they were told. Agentic AI doesn't follow those rules. Tools like Copilot Cowork interpret, decide, and act. Two identical prompts can produce two different outcomes that touch email, permissions, and workflows before a single human reviews them. Combine that with the fastest enterprise adoption curve we've ever seen, driven by Microsoft embedding these capabilities directly into tools people already use, and the blast radius is significantly larger.

As agent-based workflows scale, the conversation must shift hard toward observability, accountability, and governance. Enterprises that treat AI agents like trusted employees, with identity, permissions, and auditability, will be far better positioned than those that treat them as just another productivity feature.

The productivity gains alone mean tools like Copilot Cowork are here to stay. Smart organizations won't wait for something to break before they figure out how to govern them.


