Microsoft Pitched OpenAI’s DALL-E as Battlefield Tool for U.S. Military

ARTIFICIAL INTELLIGENCE-AI, 15 Apr 2024

Sam Biddle | The Intercept - TRANSCEND Media Service

In this photo illustration, a DALL-E 2 software logo is seen on a smartphone screen in Ukraine on 2 Feb 2023.
Photo Illustration by Pavlo Gonchar/SOPA Images/LightRocket via Getty Images

Any battlefield use of the software would be a dramatic turnaround for OpenAI, which describes its mission as developing AI that can benefit all of humanity.

10 Apr 2024

Microsoft last year proposed using OpenAI’s mega-popular image generation tool, DALL-E, to help the Department of Defense build software to execute military operations, according to internal presentation materials reviewed by The Intercept. The revelation comes just months after OpenAI quietly ended its prohibition against military work.

The Microsoft presentation deck, titled “Generative AI with DoD Data,” provides a general breakdown of how the Pentagon can make use of OpenAI’s machine learning tools, including the immensely popular ChatGPT text generator and DALL-E image creator, for tasks ranging from document analysis to machine maintenance. (Microsoft invested $10 billion in the ascendant machine learning startup last year, and the two businesses have become tightly intertwined. In February, The Intercept and other digital news outlets sued Microsoft and OpenAI for using their journalism without permission or credit.)

The Microsoft document is drawn from a large cache of materials presented at an October 2023 Department of Defense “AI literacy” training seminar hosted by the U.S. Space Force in Los Angeles. The event included a variety of presentations from machine learning firms, including Microsoft and OpenAI, about what they have to offer the Pentagon.

The publicly accessible files, discovered by journalist Jack Poulson, were hosted on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology acquisition. On Wednesday, Poulson published a broader investigation into the presentation materials. Alethia Labs has worked closely with the Pentagon to help it quickly integrate artificial intelligence tools into its arsenal, and since last year has contracted with the Pentagon’s main AI office. The firm did not respond to a request for comment.

One page of the Microsoft presentation highlights a variety of “common” federal uses for OpenAI, including for defense. One bullet point under “Advanced Computer Vision Training” reads: “Battle Management Systems: Using the DALL-E models to create images to train battle management systems.” Just as it sounds, a battle management system is a command-and-control software suite that provides military leaders with a situational overview of a combat scenario, allowing them to coordinate things like artillery fire, airstrike target identification, and troop movements. The reference to computer vision training suggests artificial images conjured by DALL-E could help Pentagon computers better “see” conditions on the battlefield, a particular boon for finding — and annihilating — targets.

In an emailed statement, Microsoft told The Intercept that while it had pitched the Pentagon on using DALL-E to train its battlefield software, it had not begun doing so. “This is an example of potential use cases that was informed by conversations with customers on the art of the possible with generative AI.” Microsoft, which declined to attribute the remark to anyone at the company, did not explain why a “potential” use case was labeled as a “common” use in its presentation.

OpenAI spokesperson Liz Bourgeous said OpenAI was not involved in the Microsoft pitch and that it had not sold any tools to the Department of Defense. “OpenAI’s policies prohibit the use of our tools to develop or use weapons, injure others or destroy property,” she wrote. “We were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes.”

Bourgeous added, “We have no evidence that OpenAI models have been used in this capacity. OpenAI has no partnerships with defense agencies to make use of our API or ChatGPT for such purposes.”

At the time of the presentation, OpenAI’s policies seemingly would have prohibited a military use of DALL-E. Microsoft told The Intercept that if the Pentagon used DALL-E or any other OpenAI tool through a contract with Microsoft, it would be subject to the usage policies of the latter company. Still, any use of OpenAI technology to help the Pentagon more effectively kill and destroy would be a dramatic turnaround for the company, which describes its mission as developing safety-focused artificial intelligence that can benefit all of humanity.

“It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm.”

“It’s not possible to build a battle management system in a way that doesn’t, at least indirectly, contribute to civilian harm,” said Brianna Rosen, a visiting fellow at Oxford University’s Blavatnik School of Government who focuses on technology ethics.

Rosen, who worked on the National Security Council during the Obama administration, explained that OpenAI’s technologies could just as easily be used to help people as to harm them, and that their use for the latter by any government is a political choice. “Unless firms such as OpenAI have written guarantees from governments they will not use the technology to harm civilians — which still probably would not be legally binding — I fail to see any way in which companies can state with confidence that the technology will not be used (or misused) in ways that have kinetic effects.”

The presentation document provides no further detail about how exactly battle management systems could use DALL-E. The reference to training these systems, however, suggests that DALL-E could be used to furnish the Pentagon with so-called synthetic training data: artificially created scenes that closely resemble relevant real-world imagery. Military software designed to detect enemy targets on the ground, for instance, could be shown a massive quantity of fake aerial images of landing strips or tank columns generated by DALL-E in order to better recognize such targets in the real world.
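To make the technique concrete: below is a minimal, hypothetical Python sketch of a synthetic-data pipeline of the sort described above, using OpenAI’s publicly documented image API (the openai Python package). The prompts, class labels, and file layout are invented for illustration; nothing here is drawn from the Microsoft presentation or any actual military system.

# Hypothetical sketch of the synthetic-training-data technique the article
# describes: generate artificial aerial scenes with DALL-E and save them,
# organized by label, for later use in training a computer-vision model.
# Prompts, labels, and file layout are illustrative assumptions only.
import pathlib

import requests
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Each (label, prompt) pair stands in for one target class a detector
# might be trained to recognize. These are invented examples.
PROMPTS = {
    "airstrip": "satellite photo of a remote desert landing strip, top-down view",
    "vehicle_column": "aerial photo of a column of trucks on a dirt road, top-down view",
}

out_dir = pathlib.Path("synthetic_dataset")

for label, prompt in PROMPTS.items():
    class_dir = out_dir / label
    class_dir.mkdir(parents=True, exist_ok=True)
    for i in range(4):  # a real pipeline would generate thousands of variants
        result = client.images.generate(
            model="dall-e-3",  # DALL-E 3 returns one image per request
            prompt=prompt,
            size="1024x1024",
            n=1,
        )
        image_bytes = requests.get(result.data[0].url, timeout=60).content
        (class_dir / f"{label}_{i:04d}.png").write_bytes(image_bytes)

# The resulting folder-per-class layout is what common vision frameworks
# (e.g., torchvision's ImageFolder) expect as training input.

The folder-per-class layout at the end is a common convention for image-classification training sets; an actual battle management pipeline would presumably be far more elaborate, but the basic loop — prompt, generate, label, store — is the technique the presentation alludes to.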

Even putting aside ethical objections, the efficacy of such an approach is debatable. “It’s known that a model’s accuracy and ability to process data accurately deteriorates every time it is further trained on AI-generated content,” said Heidy Khlaaf, a machine learning safety engineer who previously contracted with OpenAI. “DALL-E images are far from accurate and do not generate images reflective even close to our physical reality, even if they were to be fine-tuned on inputs of battlefield management system. These generative image models cannot even accurately generate a correct number of limbs or fingers, how can we rely on them to be accurate with respect to a realistic field presence?”

In an interview last month with the Center for Strategic and International Studies, Capt. M. Xavier Lugo of the U.S. Navy envisioned a military application of synthetic data exactly like the kind DALL-E can crank out, suggesting that faked images could be used to train drones to better see and recognize the world beneath them.

Lugo, mission commander of the Pentagon’s generative AI task force and a member of the Department of Defense Chief Digital and Artificial Intelligence Office, is listed as a contact at the end of the Microsoft presentation document. The presentation was made by Microsoft employee Nehemiah Kuhns, a “technology specialist” working with the Space Force and Air Force.

The Air Force is currently building the Advanced Battle Management System, its portion of a broader multibillion-dollar Pentagon project called Joint All-Domain Command and Control, which aims to network together the entire U.S. military for expanded communication across branches, AI-powered data analysis, and, ultimately, an improved capacity to kill. Through JADC2, as the project is known, the Pentagon envisions a near future in which Air Force drone cameras, Navy warship radar, Army tanks, and Marines on the ground all seamlessly exchange data about the enemy in order to better destroy them.

On April 3, U.S. Central Command revealed it had already begun using elements of JADC2 in the Middle East.

The Department of Defense didn’t answer specific questions about the Microsoft presentation, but spokesperson Tim Gorman told The Intercept that “the [Chief Digital and Artificial Intelligence Office’s] mission is to accelerate the adoption of data, analytics, and AI across DoD. As part of that mission, we lead activities to educate the workforce on data and AI literacy, and how to apply existing and emerging commercial technologies to DoD mission areas.”

While Microsoft has long reaped billions from defense contracts, OpenAI only recently acknowledged it would begin working with the Department of Defense. In response to The Intercept’s January report on OpenAI’s military-industrial about-face, company spokesperson Niko Felix said that even under the loosened language, “Our policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.”

“The point is you’re contributing to preparation for warfighting.”

Whether the Pentagon’s use of OpenAI software would entail harm or not might depend on a literal view of how these technologies work, akin to the argument that the company that builds the gun or trains the shooter is not responsible for where it’s aimed or who pulls the trigger. “They may be threading a needle between the use of [generative AI] to create synthetic training data and its use in actual warfighting,” said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. “But that would be a spurious distinction in my view, because the point is you’re contributing to preparation for warfighting.”

Unlike OpenAI, Microsoft makes little pretense of forgoing harm in its “responsible AI” document and openly promotes the military use of its machine learning tools.

Following its policy reversal, OpenAI was also quick to emphasize to the public and business press that its collaboration with the military was of a defensive, peaceful nature. In a January interview at Davos responding to The Intercept’s reporting, OpenAI vice president of global affairs Anna Makanju assured panel attendees that the company’s military work was focused on applications like cybersecurity initiatives and veteran suicide prevention, and that the company’s groundbreaking machine learning tools were still forbidden from causing harm or destruction.

Contributing to the development of a battle management system, however, would place OpenAI’s military work far closer to warfare itself. While OpenAI’s claim of avoiding direct harm could be technically true if its software does not directly operate weapons systems, said Khlaaf, the machine learning safety engineer, its “use in other systems, such as military operation planning or battlefield assessments” would ultimately impact “where weapons are deployed or missions are carried out.”

Indeed, it’s difficult to imagine a battle whose primary purpose isn’t causing bodily harm and property damage. An Air Force press release from March, for example, describes a recent battle management system exercise as delivering “lethality at the speed of data.”

Other materials from the AI literacy seminar series make clear that “harm” is, ultimately, the point. A slide from a welcome presentation, given the day before Microsoft’s, asks the question, “Why should we care?” The answer: “We have to kill bad guys.” In a nod to the “literacy” aspect of the seminar, the slide adds, “We need to know what we’re talking about… and we don’t yet.”

Update: April 11, 2024
This article was updated to clarify Microsoft’s promotion of its work with the Department of Defense.

_______________________________________________

Sam Biddle
sam.biddle@theintercept.com
@sambiddle.29 on Signal
@sambiddle.bsky.social on Bluesky
@samfbiddle on X


Go to Original – theintercept.com




