{"id":259479,"date":"2024-04-15T12:00:22","date_gmt":"2024-04-15T11:00:22","guid":{"rendered":"https:\/\/www.transcend.org\/tms\/?p=259479"},"modified":"2024-04-12T07:13:43","modified_gmt":"2024-04-12T06:13:43","slug":"microsoft-pitched-openais-dall-e-as-battlefield-tool-for-u-s-military","status":"publish","type":"post","link":"https:\/\/www.transcend.org\/tms\/2024\/04\/microsoft-pitched-openais-dall-e-as-battlefield-tool-for-u-s-military\/","title":{"rendered":"Microsoft Pitched OpenAI\u2019s DALL-E as Battlefield Tool for U.S. Military"},"content":{"rendered":"<div id=\"attachment_259480\" style=\"width: 410px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/04\/dalle2-pentagon-ai.webp\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-259480\" class=\"wp-image-259480\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/04\/dalle2-pentagon-ai-1024x512.webp\" alt=\"\" width=\"400\" height=\"200\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/04\/dalle2-pentagon-ai-1024x512.webp 1024w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/04\/dalle2-pentagon-ai-300x150.webp 300w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/04\/dalle2-pentagon-ai-768x384.webp 768w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/04\/dalle2-pentagon-ai-1536x768.webp 1536w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/04\/dalle2-pentagon-ai-2048x1024.webp 2048w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><\/a><p id=\"caption-attachment-259480\" class=\"wp-caption-text\">In this photo illustration, a DALL-E 2 software logo is seen on a smartphone screen in Ukraine on 2 Feb 2023.<br \/>Photo Illustration by Pavlo GoncharSOPA Images\/LightRocket via Getty Images<\/p><\/div>\n<blockquote><p><em>Any battlefield use of the software would be a dramatic turnaround for OpenAI, which describes its mission 
as developing AI that can benefit all of humanity. <\/em><\/p><\/blockquote>\n<p><em>10 Apr 2024<\/em> &#8211; <span class=\"has-underline\">Microsoft last year<\/span> proposed using OpenAI\u2019s mega-popular image generation tool, DALL-E, to help the Department of Defense build software to execute military operations, according to internal presentation materials reviewed by The Intercept. The revelation comes just months after OpenAI <a target=\"_blank\" href=\"https:\/\/theintercept.com\/2024\/01\/12\/open-ai-military-ban-chatgpt\" >silently ended<\/a> its prohibition against military work.<\/p>\n<p>The Microsoft presentation deck, titled \u201c<a href=\"https:\/\/www.documentcloud.org\/documents\/24538175-generative-ai-with-dod-data_microsoft\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">Generative AI with DoD Data<\/a>,\u201d provides a general breakdown of how the Pentagon can make use of OpenAI\u2019s machine learning tools, including the immensely popular ChatGPT text generator and DALL-E image creator, for tasks ranging from document analysis to machine maintenance. (Microsoft invested $10 billion in the ascendant machine learning startup last year, and the two businesses have become tightly intertwined. In February, The Intercept and other digital news outlets\u00a0<a href=\"https:\/\/www.loevy.com\/wp-content\/uploads\/2024\/02\/Intercept-v.-OpenAI-Complaint-Filed.pdf\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">sued Microsoft and OpenAI<\/a>\u00a0for using their journalism without permission or credit.)<\/p>\n<p>The Microsoft document is drawn from a large cache of materials presented at an October 2023 Department of Defense \u201cAI literacy\u201d training seminar hosted by the U.S. Space Force in Los Angeles. 
The event included a variety of presentations from machine learning firms, including Microsoft and OpenAI, about what they have to offer the Pentagon.<\/p>\n<p>The publicly accessible files were found on the website of Alethia Labs, a nonprofit consultancy that helps the federal government with technology acquisition, and discovered by journalist <a target=\"_blank\" href=\"https:\/\/theintercept.com\/staff\/jack-poulson\/\" >Jack Poulson<\/a>. On Wednesday, Poulson <a href=\"https:\/\/jackpoulson.substack.com\/your-ai-is-your-rifle\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">published a broader investigation<\/a> into the presentation materials. Alethia Labs has worked closely with the Pentagon to help it quickly integrate artificial intelligence tools into its arsenal, and since last year has contracted with the Pentagon\u2019s main AI office. The firm did not respond to a request for comment.<\/p>\n<p>One page of the Microsoft presentation highlights a variety of \u201ccommon\u201d federal uses for OpenAI, including for defense. One bullet point under \u201cAdvanced Computer Vision Training\u201d reads: \u201cBattle Management Systems: Using the DALL-E models to create images to train battle management systems.\u201d Just as it sounds, a battle management system is a command-and-control software suite that provides military leaders with a situational overview of a combat scenario, allowing them to coordinate things like artillery fire, airstrike target identification, and troop movements. The reference to computer vision training suggests artificial images conjured by DALL-E could help Pentagon computers better \u201csee\u201d conditions on the battlefield, a particular boon for finding \u2014 and annihilating \u2014 targets.<\/p>\n<p>In an emailed statement, Microsoft told The Intercept that while it had pitched the Pentagon on using DALL-E to train its battlefield software, it had not begun doing so. 
\u201cThis is an example of potential use cases that was informed by conversations with customers on the art of the possible with generative AI.\u201d Microsoft, which declined to attribute the remark to anyone at the company, did not explain why a \u201cpotential\u201d use case was labeled as a \u201ccommon\u201d use in its presentation.<\/p>\n<p>OpenAI spokesperson Liz Bourgeous said OpenAI was not involved in the Microsoft pitch and that it had not sold any tools to the Department of Defense. \u201cOpenAI\u2019s policies prohibit the use of our tools to develop or use weapons, injure others or destroy property,\u201d she wrote. \u201cWe were not involved in this presentation and have not had conversations with U.S. defense agencies regarding the hypothetical use cases it describes.\u201d<\/p>\n<p>Bourgeous added, \u201cWe have no evidence that OpenAI models have been used in this capacity. OpenAI has no partnerships with defense agencies to make use of our API or ChatGPT for such purposes.\u201d<\/p>\n<p>At the time of the presentation, OpenAI\u2019s policies seemingly would have prohibited a military use of DALL-E. Microsoft told The Intercept that if the Pentagon used DALL-E or any other OpenAI tool through a contract with Microsoft, it would be subject to the usage policies of the latter company. 
Still, any use of OpenAI technology to help the Pentagon more effectively kill and destroy would be a dramatic turnaround for the company, which describes its mission as developing safety-focused artificial intelligence that can benefit all of humanity.<\/p>\n<blockquote class=\"stylized pull-right\" data-shortcode-type=\"pullquote\" data-pull=\"right\"><p><em><strong>\u201cIt\u2019s not possible to build a battle management system in a way that doesn\u2019t, at least indirectly, contribute to civilian harm.\u201d<\/strong><\/em><\/p><\/blockquote>\n<p>\u201cIt\u2019s not possible to build a battle management system in a way that doesn\u2019t, at least indirectly, contribute to civilian harm,\u201d said Brianna Rosen, a visiting fellow at Oxford University\u2019s Blavatnik School of Government who focuses on technology ethics.<\/p>\n<p>Rosen, who worked on the National Security Council during the Obama administration, explained that OpenAI\u2019s technologies could just as easily be used to help people as to harm them, and their use for the latter by any government is a political choice. \u201cUnless firms such as OpenAI have written guarantees from governments they will not use the technology to harm civilians \u2014 which still probably would not be legally-binding \u2014 I fail to see any way in which companies can state with confidence that the technology will not be used (or misused) in ways that have kinetic effects.\u201d<\/p>\n<p><iframe loading=\"lazy\" class=\"wp-block-document mb-5\" src=\"https:\/\/embed.documentcloud.org\/documents\/24538175-generative-ai-with-dod-data_microsoft\/?embed=1&amp;title=1\" width=\"100%\" height=\"450\" sandbox=\"allow-scripts allow-same-origin allow-popups allow-forms\" data-mce-fragment=\"1\"><\/iframe><\/p>\n<p><span class=\"has-underline\">The presentation document<\/span> provides no further detail about how exactly battlefield management systems could use DALL-E. 
The reference to training these systems, however, suggests that DALL-E could be used to furnish the Pentagon with so-called synthetic training data: artificially created scenes that closely resemble germane, real-world imagery. Military software designed to detect enemy targets on the ground, for instance, could be shown a massive quantity of fake aerial images of landing strips or tank columns generated by DALL-E in order to better recognize such targets in the real world.<\/p>\n<p>Even putting aside ethical objections, the efficacy of such an approach is debatable. \u201cIt\u2019s known that a model\u2019s accuracy and ability to process data accurately deteriorates every time it is further trained on AI-generated content,\u201d said Heidy Khlaaf, a machine learning safety engineer who previously contracted with OpenAI. \u201cDall-E images are far from accurate and do not generate images reflective even close to our physical reality, even if they were to be fine-tuned on inputs of Battlefield management system. These generative image models cannot even accurately generate a correct number of limbs or fingers, how can we rely on them to be accurate with respect to a realistic field presence?\u201d<\/p>\n<p>In an<a href=\"https:\/\/www.csis.org\/analysis\/scaling-ai-enabled-capabilities-dod-government-and-industry-perspectives\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\"> interview last month<\/a> with the Center for Strategic and International Studies, Capt. M. Xavier Lugo of the U.S. 
Navy envisioned a military application of synthetic data exactly like the kind DALL-E can crank out, suggesting that faked images could be used to train drones to better see and recognize the world beneath them.<\/p>\n<p>Lugo, mission commander of the Pentagon\u2019s generative AI task force and member of the Department of Defense Chief Digital and Artificial Intelligence Office, is listed as a contact at the end of the Microsoft presentation document. The presentation was made by Microsoft employee Nehemiah Kuhns, a \u201ctechnology specialist\u201d working on the Space Force and Air Force.<\/p>\n<p>The Air Force is currently building the Advanced Battle Management System, its portion of a broader <a href=\"https:\/\/breakingdefense.com\/2022\/10\/jadc2-spending-is-sprawling-dod-should-keep-watch-but-let-it-go\/\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">multibillion-dollar Pentagon project<\/a> called the Joint All-Domain Command and Control, which aims to network together the entire U.S. military for expanded communication across branches, AI-powered data analysis, and, ultimately, an improved capacity to kill. Through JADC2, as the project is known, the Pentagon <a href=\"https:\/\/www.defense.gov\/News\/News-Stories\/article\/article\/2427998\/joint-all-domain-command-control-framework-belongs-to-warfighters\/\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">envisions<\/a> a near future in which Air Force drone cameras, Navy warship radar, Army tanks, and Marines on the ground all seamlessly exchange data about the enemy in order to better destroy them.<\/p>\n<p>On April 3, U.S. 
Central Command <a href=\"https:\/\/defensescoop.com\/2024\/04\/03\/centcom-jadc2-deploy-minimum-viable-capability\/\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">revealed<\/a> it had already begun using elements of JADC2 in the Middle East.<\/p>\n<p>The Department of Defense didn\u2019t answer specific questions about the Microsoft presentation, but spokesperson Tim Gorman told The Intercept that \u201cthe [Chief Digital and Artificial Intelligence Office\u2019s] mission is to accelerate the adoption of data, analytics, and AI across DoD. As part of that mission, we lead activities to educate the workforce on data and AI literacy, and how to apply existing and emerging commercial technologies to DoD mission\u00a0areas.\u201d<\/p>\n<p><span class=\"has-underline\">While Microsoft has<\/span> long reaped billions from defense contracts, OpenAI only recently acknowledged it would begin working with the Department of Defense. In response to The Intercept\u2019s <a target=\"_blank\" href=\"https:\/\/theintercept.com\/2024\/01\/12\/open-ai-military-ban-chatgpt\/\" >January report<\/a> on OpenAI\u2019s military-industrial about-face, the company\u2019s spokesperson Niko Felix said that even under the loosened language, \u201cOur policy does not allow our tools to be used to harm people, develop weapons, for communications surveillance, or to injure others or destroy property.\u201d<\/p>\n<blockquote class=\"stylized pull-right\" data-shortcode-type=\"pullquote\" data-pull=\"right\"><p><em><strong>\u201cThe point is you\u2019re contributing to preparation for warfighting.\u201d<\/strong><\/em><\/p><\/blockquote>\n<p>Whether the Pentagon\u2019s use of OpenAI software would entail harm or not might depend on a literal view of how these technologies work, akin to arguments that the company that helps build the gun or trains the shooter is not responsible for where it\u2019s aimed or pulling the trigger. 
\u201cThey may be threading a needle between the use of [generative AI] to create synthetic training data and its use in actual warfighting,\u201d said Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University. \u201cBut that would be a spurious distinction in my view, because the point is you\u2019re contributing to preparation for warfighting.\u201d<\/p>\n<p>Unlike OpenAI, Microsoft has little pretense about forgoing harm in its \u201cresponsible AI\u201d document and <a href=\"https:\/\/blogs.microsoft.com\/on-the-issues\/2022\/05\/03\/artificial-intelligence-department-of-defense-cyber-missions\/\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">openly promotes<\/a> the <a href=\"https:\/\/azure.microsoft.com\/en-us\/explore\/global-infrastructure\/government\/dod\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">military use<\/a> of its <a href=\"https:\/\/www.nytimes.com\/2018\/10\/26\/us\/politics\/ai-microsoft-pentagon.html\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">machine learning tools<\/a>.<\/p>\n<p>Following its policy reversal, OpenAI was also quick to emphasize to the public and business press that its collaboration with the military was of a defensive, peaceful nature. 
In a January interview at Davos responding to The Intercept\u2019s reporting, OpenAI vice president of global affairs Anna Makanju <a href=\"https:\/\/www.bloomberg.com\/news\/articles\/2024-01-16\/openai-working-with-us-military-on-cybersecurity-tools-for-veterans\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">assured<\/a> panel attendees that the company\u2019s military work was focused on applications like cybersecurity initiatives and veteran suicide prevention, and that the company\u2019s groundbreaking machine learning tools were still forbidden from causing harm or destruction.<\/p>\n<p>Contributing to the development of a battle management system, however, would place OpenAI\u2019s military work far closer to warfare itself. While OpenAI\u2019s claim of avoiding direct harm could be technically true if its software does not directly operate weapons systems, Khlaaf, the machine learning safety engineer, said, its \u201cuse in other systems, such as military operation planning or battlefield assessments\u201d would ultimately impact \u201cwhere weapons are deployed or missions are carried out.\u201d<\/p>\n<p>Indeed, it\u2019s difficult to imagine a battle whose primary purpose isn\u2019t causing bodily harm and property damage. An Air Force press release from March, for example, <a href=\"https:\/\/www.spaceforce.mil\/News\/Article-Display\/Article\/3699914\/daf-delivers-lethality-at-the-speed-of-data-during-project-convergence-capstone\/\"  target=\"_blank\" rel=\"noopener noreferrer\" aria-describedby=\"targetBlankDescription\">describes<\/a> a recent battle management system exercise as delivering \u201clethality at the speed of data.\u201d<\/p>\n<p>Other materials from the AI literacy seminar series make clear that \u201charm\u201d is, ultimately, the point. 
A slide from a welcome presentation given the day before Microsoft\u2019s asks the question, \u201cWhy should we care?\u201d The answer: \u201cWe have to kill bad guys.\u201d In a nod to the \u201cliteracy\u201d aspect of the seminar, the slide adds, \u201cWe need to know what we\u2019re talking about\u2026 and we don\u2019t yet.\u201d<\/p>\n<p><strong>Update: April 11, 2024<br \/>\n<\/strong><em>This article was updated to clarify Microsoft\u2019s promotion of its work with the Department of Defense.<\/em><\/p>\n<p>_______________________________________________<\/p>\n<p style=\"padding-left: 40px;\"><em><a target=\"_blank\" href=\"https:\/\/theintercept.com\/staff\/sambiddle\/\" ><img loading=\"lazy\" decoding=\"async\" class=\"alignleft size-full wp-image-89314\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2017\/03\/sam-biddle-staff-e1492275425120.jpg\" alt=\"\" width=\"100\" height=\"100\" \/><\/a><\/em><em><a target=\"_blank\" href=\"https:\/\/theintercept.com\/staff\/sambiddle\/\" >Sam Biddle <\/a><\/em><br \/>\n<em><a href=\"mailto:sam.biddle@theintercept.com\" data-module=\"AuthorEmail\" data-module-uid=\"bab21ab7-314c-4ee4-a07c-cf76b9d7c4ca\">sam.biddle@theintercept.com <\/a><\/em><br \/>\n<em>@sambiddle.29 <\/em><em>on Signal<\/em><br \/>\n<em><a target=\"_blank\" href=\"https:\/\/bsky.app\/profile\/sambiddle.bsky.social\"  aria-describedby=\"targetBlankDescription\">@sambiddle.bsky.social <\/a><\/em><em>on Bluesky<\/em><br \/>\n<em><a target=\"_blank\" href=\"https:\/\/twitter.com\/samfbiddle\/\"  aria-describedby=\"targetBlankDescription\">@samfbiddle <\/a><\/em><em>on X<\/em><\/p>\n<p>&nbsp;<\/p>\n<p style=\"text-align: left;\"><a target=\"_blank\" href=\"https:\/\/theintercept.com\/2024\/04\/10\/microsoft-openai-dalle-ai-military-use\/?utm_medium=email&amp;utm_source=The%20Intercept%20Newsletter\" >Go to Original &#8211; theintercept.com<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>10 Apr 2024 &#8211; Microsoft last year 
proposed using OpenAI\u2019s image generation tool, DALL-E, to help the DOD build software for military operations months after OpenAI ended its prohibition against military work. Contradiction: OpenAI describes its mission as developing AI that can benefit all of humanity.<\/p>\n","protected":false},"author":4,"featured_media":259480,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3078],"tags":[1733,3261,1877,3114,3262,112,70],"class_list":["post-259479","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","tag-artificial-intelligence-ai","tag-dall-e","tag-microsoft","tag-militarism-and-ai","tag-openai","tag-pentagon","tag-usa"],"_links":{"self":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/259479","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/comments?post=259479"}],"version-history":[{"count":3,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/259479\/revisions"}],"predecessor-version":[{"id":259485,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/259479\/revisions\/259485"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media\/259480"}],"wp:attachment":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media?parent=259479"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/categories?post=259479"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/t
ags?post=259479"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}