OpenAI policies got a quiet update, removing ban on military and warfare applications

The major rewrite still prohibits uses that "harm yourself or others."
By Chase DiBenedetto
[Image: A phone displays the OpenAI logo bathed in green light. Credit: Dilara Irem Sancar / Anadolu via Getty Images]

OpenAI may be paving the way toward finding out its AI's military potential.

As first reported by The Intercept on Jan. 12, a recent company policy change completely removed previous language banning "activity that has high risk of physical harm," including the specific examples of "weapons development" and "military and warfare."

As of Jan. 10, OpenAI's usage guidelines no longer include a prohibition on "military and warfare" uses within the language that obligates users to prevent harm. The policy now bans only the use of OpenAI technology, such as its large language models (LLMs), to "develop or use weapons."

Subsequent reporting on the policy edit pointed to the immediate possibility of lucrative partnerships between OpenAI and defense departments seeking to utilize generative AI in administrative or intelligence operations.

In Nov. 2023, the U.S. Department of Defense issued a statement on its mission to promote "the responsible military use of artificial intelligence and autonomous systems," citing the country's endorsement of the international Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy — an American-led set of best practices announced in Feb. 2023 and developed to monitor and guide the development of AI military capabilities.

"Military AI capabilities includes not only weapons but also decision support systems that help defense leaders at all levels make better and more timely decisions, from the battlefield to the boardroom, and systems relating to everything from finance, payroll, and accounting, to the recruiting, retention, and promotion of personnel, to collection and fusion of intelligence, surveillance, and reconnaissance data," the statement explains.

The American military has already used AI in the Russia-Ukraine war and in the development of AI-powered autonomous military vehicles. Elsewhere, AI has been incorporated into military intelligence and targeting systems, including one known as "The Gospel," which Israeli forces use to pinpoint targets and reportedly "reduce human casualties" in their attacks on Gaza.

AI watchdogs and activists have consistently expressed concern over the increasing incorporation of AI technologies in both cyber conflict and combat, fearing an escalation of armed conflict in addition to long-noted biases in AI systems.

In a statement to the Intercept, OpenAI spokesperson Niko Felix explained the change was intended to streamline the company's guidelines: "We aimed to create a set of universal principles that are both easy to remember and apply, especially as our tools are now globally used by everyday users who can now also build GPTs. A principle like ‘Don’t harm others’ is broad yet easily grasped and relevant in numerous contexts. Additionally, we specifically cited weapons and injury to others as clear examples."

An OpenAI spokesperson further clarified the change in an email to Mashable: "Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property. There are, however, national security use cases that align with our mission. For example, we are already working with DARPA to spur the creation of new cybersecurity tools to secure open source software that critical infrastructure and industry depend on. It was not clear whether these beneficial use cases would have been allowed under “military” in our previous policies. So the goal with our policy update is to provide clarity and the ability to have these discussions."

OpenAI now introduces its usage policies with a simpler refrain: "We aim for our tools to be used safely and responsibly, while maximizing your control over how you use them."

UPDATE: Jan. 16, 2024, 12:28 p.m. EST This article has been updated to include an additional statement from OpenAI.

Chase DiBenedetto
Social Good Reporter

Chase joined Mashable's Social Good team in 2020, covering online stories about digital activism, climate justice, accessibility, and media representation. Her work also touches on how these conversations manifest in politics, popular culture, and fandom. Sometimes she's very funny.

