OpenAI quietly removes ban on military use of its AI tools

Chris Ratliffe | Bloomberg | Getty Images

Sam Altman, CEO of OpenAI, during an interview at Bloomberg House on the opening day of the World Economic Forum in Davos, Switzerland, on Jan. 16, 2024.

  • OpenAI has quietly walked back a ban on the military use of ChatGPT and its other artificial intelligence tools, although its policies still state that users should not "use our service to harm yourself or others," including to "develop or use weapons."
  • Up until at least Wednesday, OpenAI's policies page specified that the company did not allow the usage of its models for "activity that has high risk of physical harm, including: weapons development [and] military and warfare."
  • The shift comes as OpenAI begins to work with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools, according to a Tuesday interview at the World Economic Forum.

OpenAI has quietly walked back a ban on the military use of ChatGPT and its other artificial intelligence tools.

The shift comes as OpenAI begins to work with the U.S. Department of Defense on AI tools, including open-source cybersecurity tools, Anna Makanju, OpenAI's VP of global affairs, said Tuesday in a Bloomberg House interview at the World Economic Forum alongside CEO Sam Altman.

Up until at least Wednesday, OpenAI's policies page specified that the company did not allow the usage of its models for "activity that has high risk of physical harm" such as weapons development or military and warfare. OpenAI has removed the specific reference to the military, although its policy still states that users should not "use our service to harm yourself or others," including to "develop or use weapons."

"Because we previously had what was essentially a blanket prohibition on military, many people thought that would prohibit many of these use cases, which people think are very much aligned with what we want to see in the world," Makanju said.

An OpenAI spokesperson told CNBC that the goal regarding the policy change is to provide clarity and allow for military use cases the company does agree with.

"Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property," the spokesperson said. "There are, however, national security use cases that align with our mission."

The news comes after years of controversy about tech companies developing technology for military use, highlighted by the public concerns of tech workers — especially those working on AI.

Workers at virtually every tech giant involved with military contracts have voiced concerns, most prominently when thousands of Google employees protested Project Maven, a Pentagon project that would use Google AI to analyze drone surveillance footage.

Microsoft employees protested a $480 million army contract that would provide soldiers with augmented-reality headsets, and more than 1,500 Amazon and Google workers signed a letter protesting a joint $1.2 billion, multiyear contract with the Israeli government and military, under which the tech giants would provide cloud computing services, AI tools and data centers.

Copyright CNBC