Always-on AI Agents put everything hackers could ever want behind a single attack surface | Tech Radar
Overview
News, deals, reviews, guides and more on the newest smartphones
News, deals, reviews, guides and more on the newest computing gadgets
Details
Start exploring exclusive deals, expert advice and more
Unlock and manage exclusive Techradar member rewards.
Always-on AI Agents put everything hackers could ever want behind a single attack surface
Always-on AI agents boost productivity but heighten security risks
When you purchase through links on our site, we may earn an affiliate commission. Here’s how it works.
Unlock instant access to exclusive member features.
Get full access to premium articles, exclusive features and a growing list of member rewards.
Always-on AI agents like Open Claw are a promising step towards a new generation of powerful digital assistants capable of handling users’ day-to-day “life admin".
But if you want an AI agent to book you a table in a restaurant, respond to your emails, do your shopping, or make a doctor’s appointment, you may be opening up your digital life to a frightening new level of risk.
Open Claw (formerly Moltbot and Clawdbot) set the internet ablaze last month, racking up over 100,000 Git Hub stars in a week (which is virtually unheard of).
The mobile app traffic your security team can't see — and AI agents are generating it
How businesses can stop their AI agents from running amok
Open Claw is making terrifying mistakes showing AI agents aren't ready for real responsibility
Its overnight success (and subsequent absorption into Sam Altman’s Open AI) speaks to the wider excitement around the potential for AI agents to usher in the next stage for AI applications.
But, as power users and SMBs rush to deploy persistent AI agents, handing them the power to browse the web, manage files, connect to inboxes and interact with other agents on their behalf, alarm bells are ringing in the cybersecurity space.
Why always-on agents create a fundamentally different risk profile to chatbots
Right now, the vast majority of AI users do so through chatbot sessions, where the user’s own systems are protected by the constrained nature of the interaction. You add your data to the model, get your answer, and close the window.
Always-on AI agents are a different matter entirely.
The selling point behind Open Claw and other AI agents is that they can perform real-world tasks on behalf of users. Set up Open Claw to run locally on your computer, and it's capable of reading and writing files, executing scripts and interacting with external services, including other AI agents.
This level of integration, bringing an AI agent into the operating system layer, with what amounts to root access, is what makes AI agents like Open Claw work. It also imperils the security of the entire system.
Small teams and individual power users are often self-hosting agents, wiring them into Gmail, Slack, AWS, Git Hub and Stripe, and deploying them with minimal friction. But this “minimal friction” comes at the cost of minimal guardrails.
Shadow AI 'double agents' are outpacing security visibility – and that's a serious concern for UK businesses
Friend or foe? AI: The new cybersecurity threat and solutions
'No one asked them to': Security experts warn malicious AI agents can team up to launch cyberattacks
This isn’t a critique of any one framework, but rather a sign that the ecosystem is moving faster than its security model. An over-permissioned agent could delete or modify critical files, leak sensitive data through logs or memory, post on social media without review, or trigger costly API calls or transactions.
A single vulnerability can expose the user’s entire digital life.
Agent-to-agent ecosystems represent a new kind of attack surface, exacerbating the threat of prompt injection. According to recent research from Gartner, over 50% of successful cybersecurity attacks against AI agents in the coming year are expected to exploit access control issues.
Prompt injection, a kind of social engineering attack that targets AI specifically, involves a third party misleading the AI model by injecting malicious instructions into the conversation context.
In the same way that a phishing email tries to trick people into giving away sensitive information, “prompt injections attempt to trick AIs into doing something you did not ask for,” according to an Open AI blog post.
This approach, combined with the inflated power of an AI agent, can have a more profound effect than making a chatbot give the wrong answer to a question.
In a sector defined by small developer teams, a DIY approach, and an emphasis on speed over safety, the people experimenting with autonomous agents today should be taking practical steps to reduce the risks associated with these new technologies.
Create dedicated accounts: Don’t give agents access to your primary inbox or root cloud credentials. Use scoped service accounts.
Segment environments: Separate experimental agents from production systems.
Rotate keys frequently: Assume credentials will leak eventually.
Red-team your own setup: Attempt prompt injection and tool misuse scenarios to see how the agent behaves.
Disable auto-execution for high-risk tools: Require confirmation for financial, administrative or destructive actions.
Audit exposed instances: Ensure your self-hosted agent isn’t reachable from the public internet without authentication.
Above all, keep an eye on your AI agents. It may feel like lessening the effectiveness of a tool, the whole point of which is ostensibly to spend less time babysitting your digital life.
But, if you wouldn’t be 100% trusting of a new human hire with access to your bank accounts and social media presence, an always-on AI agent deserves a similar level of scrutiny.
Agentic AI is on track to shape the next decade in terms of how we think about productivity. The ability to delegate complex, multi-step workflows to virtual agents will be transformative. But autonomy without the necessary guardrails is exposure, not innovation.
Because, unlike the AI solutions that came before them, AI agents aren’t just answering your questions or drafting your emails for review. They’re acting on your behalf. Right now, they’re just one vulnerability away from acting on somebody else’s.
We've featured the best endpoint protection software.
This article was produced as part of Tech Radar Pro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of Tech Radar Pro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
You must confirm your public display name before commenting
1 This one accessory leveled up my Play Station Portal and PS5 experience
2'The best skill that you can learn is learn to learn....don't assume that whatever you learned is now good for the next 40 years: AWS CEO says workers need to keep adapting to deal with AI
3'This puts organizations at risk of credential theft, data manipulation and broader compromise': UK government, Microsoft warn Russian hackers are hitting TP-Link home routers to hijack internet traffic
4 Is this what playing on PS6 will be like? Cyberpunk 2077 finally has a dedicated PS5 Pro patch and it's the best looking console version to date
5 How to pick a website template that complements your brand identity
Tech Radar is part of Future US Inc, an international media group and leading digital publisher. Visit our corporate site.
© Future US, Inc. Full 7th Floor, 130 West 42nd Street, New York, NY 10036.
Key Takeaways
- News, deals, reviews, guides and more on the newest smartphones
- News, deals, reviews, guides and more on the newest computing gadgets
- Start exploring exclusive deals, expert advice and more
- Unlock and manage exclusive Techradar member rewards
-
Always-on AI Agents put everything hackers could ever want behind a single attack surface



