OpenAI and Anduril on Wednesday announced a partnership allowing the defense tech company to deploy advanced artificial intelligence systems for "national security missions."
It's part of a broader and controversial trend of AI companies not only walking back bans on military use of their products, but also entering into partnerships with defense industry giants and the U.S. Department of Defense.
Last month, Anthropic, the Amazon-backed AI startup founded by ex-OpenAI research executives, and defense contractor Palantir announced a partnership with Amazon Web Services to "provide U.S. intelligence and defense agencies access to [Anthropic's] Claude 3 and 3.5 family of models on AWS." This fall, Palantir signed a new five-year, up to $100 million contract to expand U.S. military access to its Maven AI warfare program.
The OpenAI-Anduril partnership announced Wednesday will "focus on improving the nation's counter-unmanned aircraft systems (CUAS) and their ability to detect, assess and respond to potentially lethal aerial threats in real-time," according to a release, which added that "Anduril and OpenAI will explore how leading edge AI models can be leveraged to rapidly synthesize time-sensitive data, reduce the burden on human operators, and improve situational awareness."
Anduril, co-founded by Palmer Luckey, who founded Oculus VR and sold it to Facebook in 2014, did not answer a question about whether reducing the burden on human operators will translate to fewer humans in the loop on high-stakes warfare decisions.
OpenAI said it is working with Anduril to help human operators make decisions "to protect U.S. military personnel on the ground from unmanned drone attacks." The company said it stands by the policy in its mission statement prohibiting the use of its AI systems to harm others.
The news comes after Microsoft-backed OpenAI in January quietly removed a ban on military use of ChatGPT and its other AI tools, just as it began working with the U.S. Department of Defense on AI projects, including open-source cybersecurity tools.
Until early January, OpenAI's policies page specified that the company did not allow the use of its models for "activity that has high risk of physical harm," such as weapons development or military and warfare applications. In mid-January, OpenAI removed the specific reference to the military, although its policy still states that users should not "use our service to harm yourself or others," including to "develop or use weapons."
The news comes after years of controversy over tech companies developing technology for military use, highlighted by public concerns among tech workers, especially those working on AI.
Employees at virtually every tech giant involved with military contracts have voiced concerns. Thousands of Google employees protested Project Maven, a Pentagon project that would use Google AI to analyze drone surveillance footage.
Microsoft employees protested a $480 million U.S. Army contract that would provide soldiers with augmented-reality headsets. More than 1,500 Amazon and Google workers signed a letter protesting a joint $1.2 billion, multiyear contract with the Israeli government and military, under which the two companies would provide cloud computing services, AI tools and data centers.
-- CNBC's Morgan Brennan contributed to this report.