
Anthropic drops its signature safety promise and rewrites AI guardrails

The policy overhaul highlights the growing tension between caution and competition


• Anthropic has removed its pledge not to train or release AI models without guaranteed safety mitigations in advance

• The company will now rely on transparency reports and safety roadmaps instead of strict preconditions

• Critics argue the shift shows the limits of voluntary AI safety commitments without binding regulation

Anthropic has formally abandoned the central promise not to train or release frontier AI systems unless it can guarantee adequate safety in advance. The company behind Claude confirmed the decision in an interview with Time, marking the end of a policy that had once set it apart among AI developers. The newly revised Responsible Scaling Policy focuses more on ensuring the company stays competitive as the AI marketplace heats up.

For years, Anthropic framed that pledge as evidence that it would resist the commercial pressures pushing competitors to ship ever more powerful systems. The policy effectively barred it from advancing beyond certain capability levels unless predefined safety measures were already in place. Now, Anthropic is adopting a more flexible framework in place of categorical pauses.

The company insists the change is pragmatic rather than ideological. Executives argue that unilateral restraint no longer makes sense in a market defined by rapid iteration and geopolitical urgency. But the shift feels like a turning point in how the AI industry thinks about self-regulation.


Under the new Responsible Scaling Policy, Anthropic pledges to publish detailed “Frontier Safety Roadmaps” outlining its planned safety milestones, along with regular “Risk Reports” that assess model capabilities and potential threats. The company also says it will match or exceed competitors’ safety efforts and delay development if it both believes it leads the field and identifies significant catastrophic risk. What it will no longer do is promise to halt training until all mitigations are guaranteed in advance.

Everyday users might not notice any changes as they interact with Claude or other AI tools. Yet the guardrails that govern how those systems are trained influence everything from accuracy to the potential for fraud and misuse. When a company once defined by its strict preconditions decides those conditions are no longer workable, it signals a broader recalibration within the industry.

When Anthropic introduced its original policy in 2023, some executives hoped it might inspire rivals or even inform eventual regulation. That regulatory momentum never fully materialized. Federal AI legislation remains stalled, and the broader political climate has tilted away from developing any framework. Companies are left to choose between voluntary restraint and competitive survival.

Anthropic maintains that its revised framework preserves meaningful safeguards. The new Roadmaps are intended to create internal pressure to prioritize mitigation research. The forthcoming Risk Reports aim to provide a clearer public accounting of how model capabilities might lead to misuse.

“The new policy still includes some guardrails, but the core promise, that Anthropic would not release models unless it could guarantee adequate safety mitigations in advance, is gone,” said Nik Kairinos, CEO and co-founder of RAIDS AI, an organization focused on independent monitoring and risk detection in AI. “This is precisely why continuous, independent monitoring of AI systems matters. Voluntary commitments can be rewritten. Regulation, backed by real-time oversight, cannot.”

Kairinos also noted the irony in Anthropic's $20 million contribution a couple of weeks ago to Public First Action, a group supporting congressional candidates pledging to push for AI safety regulation. That contribution, he suggested, underscores the complexity of the current moment: companies may advocate for stronger regulation while simultaneously recalibrating their own internal constraints.


The broader question facing the industry is whether voluntary norms can meaningfully shape the trajectory of transformative technologies. Anthropic once positioned itself as a model of restraint. Its revised policy acknowledges that restraint must now be weighed against competition. That does not mean safety has been abandoned, but it does mean the order of operations has shifted.

The average person may not read Responsible Scaling Policies or Risk Reports, but they live with the downstream effects of those decisions. Anthropic argues that meaningful safety research requires staying at the frontier, not stepping back from it. Whether that philosophy proves reassuring or unsettling depends largely on one’s view of how fast AI should move and how much risk society is willing to tolerate in exchange for progress.


Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.


