
OpenAI Had Banned Military Use. The Pentagon Tested Its Models Through Microsoft Anyway | WIRED

Sources allege the Defense Department experimented with Microsoft’s version of OpenAI technology before the ChatGPT-maker lifted its prohibition on military...


Overview

OpenAI CEO Sam Altman is still in the hot seat this week after his company signed a deal with the US military. OpenAI employees have criticized the move, which came after Anthropic’s roughly $200 million contract with the Pentagon imploded, and asked Altman to release more information about the agreement. Altman admitted it looked “sloppy” in a social media post.

Details

While this incident has become a major news story, it may be just the latest and most public example of OpenAI creating vague policies around how the US military can access its AI.

In 2023, OpenAI’s usage policy explicitly banned the military from accessing its AI models. But some OpenAI employees discovered the Pentagon had already started experimenting with Azure OpenAI, a version of OpenAI’s models offered by Microsoft, two sources familiar with the matter said. At the time, Microsoft had been contracting with the Department of Defense for decades. It was also OpenAI’s largest investor and had broad license to commercialize the startup’s technology.

That same year, OpenAI employees saw Pentagon officials walking through the company’s San Francisco offices, the sources said. They spoke on the condition of anonymity because they are not authorized to comment on private company matters.

Some OpenAI employees were wary about associating with the Pentagon, while others were simply confused about what OpenAI’s usage policies meant. Did the policy apply to Microsoft? While sources tell WIRED it was not clear to most employees at the time, spokespeople from OpenAI and Microsoft say Azure OpenAI products are not, and were not, subject to OpenAI’s policies.

“AI is already playing a significant role in national security and we believe it’s important to have a seat at the table to help ensure it’s deployed safely and responsibly,” OpenAI spokesperson Liz Bourgeois said in a statement. “We've been transparent with our employees as we’ve approached this work, providing regular updates and dedicated channels where teams can ask questions and engage directly with our national security team.”

The Department of Defense did not respond to WIRED's request for comment.

By January 2024, OpenAI had updated its policies to remove the blanket ban on military use. Several OpenAI employees found out about the policy update through an article in The Intercept, sources say. Company leaders later addressed the change at an all-hands meeting, explaining how the company would tread carefully in this area moving forward.

In December 2024, OpenAI announced a partnership with Anduril to develop and deploy AI systems for “national security missions.” Ahead of the announcement, OpenAI told employees that the partnership was narrow in scope and would deal only with unclassified workloads, the same sources said. This stood in contrast to a deal Anthropic had signed with Palantir, which would see Anthropic’s AI used for classified military work.

Palantir approached OpenAI in the fall of 2024 to discuss participating in its “FedStart” program, an OpenAI spokesperson confirmed to WIRED. OpenAI ultimately turned it down, telling employees it would have been too high-risk, two sources familiar with the matter tell WIRED. However, OpenAI now works with Palantir in other ways.

Around the time the Anduril deal was announced, a few dozen OpenAI employees joined a public Slack channel to discuss their concerns about the company's military partnerships, sources say and a spokesperson confirmed. Some believed the company’s models were too unreliable to handle a user’s credit card information, let alone assist Americans on the battlefield.

Not everyone shared their concerns. Other employees felt that the Anduril partnership showed the company would handle its military partnerships responsibly. “OpenAI’s approach thus far has been ‘measure twice, cut once’ when it comes to broad classified deployments. Employees are engaged on the question of what approach to national security is in line with the mission,” a current OpenAI researcher tells me.

That’s partly why OpenAI’s latest Pentagon deal divided employees. While Altman publicly said he supported Anthropic’s red lines, which rule out allowing its AI to be used for legal mass surveillance or the development of autonomous weapons, OpenAI’s agreement appeared to leave room for those very activities, according to outside legal experts.

“The biggest losers in all of this are everyday people and civilians in conflict zones,” said Sarah Shoker, the former head of OpenAI’s geopolitics team, in a Substack post last week. “Our ability to understand the effects of military AI in war is and will be severely hindered due to layers of opacity caused by technical design and policy. It’s black boxes all the way down.”

Charlie Bullock, a senior research fellow with the Institute for Law and AI, told WIRED that OpenAI’s public comments suggest the Pentagon may have been permitted to engage in forms of surveillance that are technically legal, such as buying up Americans’ user data from third-party firms and analyzing it with AI. OpenAI later amended the terms of its agreement to address this specific concern, though Bullock notes that without seeing the full terms of the agreement, the public essentially has to take OpenAI at its word.

“Over the weekend it became clear that the original language in the OpenAI/DoW agreement left legitimate questions unanswered, especially around some novel ways that AI could potentially enable legal surveillance,” said Noam Brown, an OpenAI researcher, in a social media post. Brown added that he now plans to become “more personally involved with policy at OpenAI.”

Just over two years after OpenAI removed its blanket ban on military use, the company seems to have embraced defense partnerships. At an all-hands meeting on Tuesday, Altman reportedly told employees that the company doesn’t get to decide what the Defense Department does with its artificial intelligence software. Altman also said he’s interested in selling the company’s AI models to NATO.


