Employees across OpenAI and Google support Anthropic’s lawsuit against the Pentagon | The Verge


The amicus brief was the latest barb in negotiations with the Department of Defense.

On Monday, Anthropic filed its lawsuit against the Department of Defense over being designated as a supply chain risk. Hours later, nearly 40 employees from OpenAI and Google — including Jeff Dean, Google’s chief scientist and Gemini lead — filed an amicus brief in support of Anthropic’s lawsuit, detailing their concerns over the Trump administration’s decision and the technology’s risks and implications.

The news follows a dramatic few weeks for Anthropic. The Trump administration labeled the company a supply chain risk — a designation typically reserved for foreign companies the government deems a potential threat to national security — after Anthropic stood firm on two red lines regarding acceptable military use of its technology: domestic mass surveillance and fully autonomous weapons (AI systems with the power to kill with no human involvement). Negotiations broke down, followed by public insults and other AI companies stepping in to sign contracts allowing “any lawful use” of their technology.

The supply chain risk designation not only prevents Anthropic from working on military contracts; it also blacklists other companies that use Anthropic products in their work for the Pentagon, forcing them to uproot Claude if they wish to maintain their lucrative contracts. However, as the maker of the first model cleared for classified intelligence, Anthropic is already deeply integrated into the Pentagon’s work — so much so that just hours after Defense Secretary Pete Hegseth announced the designation, the U.S. military reportedly used Claude in the campaign that killed the leader of Iran, Ayatollah Ali Khamenei.

The amicus brief argues that Anthropic’s supply chain risk designation “is improper retaliation that harms the public interest” and that the concerns behind Anthropic’s red lines “are real and require a response.” It also defends the two red lines themselves, stating that “mass domestic surveillance powered by AI poses profound risks to democratic governance — even in responsible hands” and that “fully autonomous lethal weapons systems present risks that must also be addressed.”

The group behind the amicus brief described themselves as “engineers, researchers, scientists, and other professionals employed at U.S. frontier artificial intelligence laboratories.”

“We build, train, and study the large-scale AI systems that serve a wide range of users and deployments, including in the consequential domains of national security, law enforcement, and military operations,” the group wrote. “We submit this brief not as spokespeople for any single company, but in our individual capacities as professionals with direct knowledge of what these systems can and cannot do, and what is at stake when their deployment outpaces the legal and ethical frameworks designed to govern them.”

On the domestic mass surveillance front, the group said that though data on American citizens exists everywhere in the form of surveillance cameras, geolocation data, social media posts, financial transactions, and more, “what does not yet exist is the AI layer that transforms this sprawling, fragmented data landscape into a unified, real-time surveillance apparatus.” Right now, they wrote, these data streams are siloed, but if AI were used to connect them, it could combine “face recognition data with location history, transaction records, social graphs, and behavioral patterns across hundreds of millions of people simultaneously.”

When it comes to lethal autonomous weapons specifically, the group said that they can be unreliable in new or unclear conditions that don’t align with the environment they were trained in — meaning that they “cannot be trusted to identify targets with perfect accuracy, and they are incapable of making the subtle contextual tradeoffs between achieving an objective and accounting for collateral effects that a human can.” Additionally, the group wrote, lethal autonomous weapons systems’ potential for hallucination means that it’s important for humans to be involved in the decision-making process “before a lethal munition is launched at a human target” — especially since the system’s chain of reasoning is often not available to operators and unclear even to the system’s developers.

The group behind the amicus brief wrote, “We are diverse in our politics and philosophies, but we are united in the conviction that today’s frontier AI systems present risks when deployed to enable domestic mass surveillance or the operation of autonomous lethal weapons systems without human oversight, and that those risks require some kind of guardrails, whether via technical safeguards or usage restrictions.”

By Hayden Field, Senior AI Reporter, and Tina Nguyen, Senior Reporter, Washington


