The AI Conundrum: The Peace Movement’s Next Big Challenge


Tom Valovic | Mass Peace Action - TRANSCEND Media Service


15 Sep 2023 – It’s hard to watch or read the news these days without bumping into the latest breathless glorification of artificial intelligence. For those of us who believe that some sort of technocratic takeover of many government functions is underway, this trend is deeply concerning. AI’s use in politics and government has wide-ranging implications for the future of democracy. For present purposes, I’m diving into what I think will be a top-of-mind issue for the peace movement going forward: how to deal with AI as a powerful enhancement to military arsenals and strategies, with mostly unknown consequences.

To get a feel for some perspectives on AI in a military context, it’s useful to look at a July 2023 article from Wired magazine, “The AI-Powered, Totally Autonomous Future of War Is Here,” which practically gushed with excitement about autonomous warfare using AI. Its message: robot ships and other autonomous large-scale weapons are already here and are being employed in various military exercises.

The application of AI to autonomous systems represents a quantum leap in the abstraction that already distances us from the immediate and often devastating impacts of modern weaponry. The accelerated use of drone warfare, promoted by President Obama and continuing today, introduced a major shift toward a kind of abstraction separating us from the immediate human consequences of military initiatives. With this “advance” in high-tech weaponry, targeted individuals six or seven thousand miles away have been killed by operators sitting comfortably behind computer screens, almost as if they were engaged in some sort of gaming experience, a marked departure from traditional notions of how warfare is conducted. The more layers of abstraction we add to military operations, of course, the further away we get from their actual and often terrifying human consequences.

What Exactly Are Autonomous Weapons?

“Autonomous weapons systems” is a very broad term, and the DoD’s complex labyrinth of acronym-laden nomenclature makes it extremely difficult to grasp the technological aspects of the various systems. The semantic obfuscation is, of course, intentional, but moving horrific human impacts into the realm of detached abstraction is also indicative of a disturbing mindset. In any event, the consistent use and understanding of terms will be important for shaping policy going forward.

The term “autonomous” can be and has been applied to killer robots, and that usage is certainly apt. But beyond specifying particular stand-alone autonomous weapons, there’s a broader usage that applies to the overarching management systems used to control military assets in the field. This usage, rather alarmingly, includes systems that employ nuclear weapons, which is where AI development is intentionally headed. An excellent article by Michael Klare in The Nation explains: “In its budget submission for 2023…the Air Force requested $231 million to develop the Advanced Battlefield Management System (ABMS), a complex network of sensors and AI-enabled computers designed to collect and interpret data on enemy operations and provide pilots and ground forces with a menu of optimal attack options. As the technology advances, the system will be capable of sending ‘fire’ instructions directly to ‘shooters,’ largely bypassing human control.”

Klare went on to discuss the future development that the DoD has planned and that weapons manufacturers like Raytheon (now RTX) have been working on for some time. In the next phase, Klare says, ABMS will be integrated with the entire spectrum of US combat forces in what’s called the Joint All-Domain Command-and-Control System (JADC2). JADC2 is planned to go well beyond mere data collection and enter the realm of actual decision-making, recommending the best weapons, including nuclear weapons, to be applied in battle. Klare points out the obvious dangers: “Imagine then the JADC2 ordering the intense bombardment of enemy bases and command systems in China itself, triggering reciprocal attacks on US facilities and a lightning decision by JADC2 to retaliate with tactical nuclear weapons, igniting a long-feared nuclear holocaust.”

Apple’s Cupertino headquarters; the tech giant is richer than 96% of the globe. (Photo: Carles Rabada on Unsplash)

The “AI Takeover” Has Been Long Planned

There’s a widespread perception that AI is a recent development coming out of the high-tech sector, a misleading picture frequently painted by the mainstream media. The reality is that AI development has been an ongoing investment on the part of the federal government for decades. According to a report from the Brookings Institution, the federal government serves as an incubator for AI projects in order to advance the US position in an AI arms race with China. The umbrella project for this undertaking is the National AI Initiative, whose self-stated goal is to advance U.S. leadership in AI; it was established by the National AI Initiative Act of 2020, which became law in January 2021.

The law’s purpose is to “accelerate AI research and application for the Nation’s economic prosperity and national security” (emphasis added). This national AI program has been overseen by a surprising number of government agencies, including and especially defense-related ones. They include but are not limited to alphabet-soup agencies like DARPA, the DoD, NASA, the NIH, IARPA, the DOE, Homeland Security, and the State Department. In this collaborative program, the federal government provides seed money for AI R&D and creates pathways for development in the private sector and for research in the university community. Collaboration between tech-oriented universities and the DoD has increased over the last few decades; MIT, for example, has sponsored an organization called the Institute for Soldier Nanotechnologies since 2003.

DARPA and other shadowy military-oriented organizations are very much on the ground floor of these efforts, which are generally framed in terms of our self-declared competition with China. Technology is power, and at the end of the day many of these tech-driven initiatives can be seen as a kind of power struggle in an increasingly technocratic world. From this mindset, whoever has the best AI systems will gain not only technological and economic superiority but also military dominance. The COO of OpenAI, the company that created ChatGPT, openly admitted in Time magazine that government funding has been the main driver of AI development for many years.

Policy Options for the Peace Movement

While there has been some deep and intense public discussion about the societal impacts of AI, much of it has been sourced from the corporate sector itself, which is trying to get out in front of AI’s “PR problem.” Going forward, it’s unlikely that the corporate sector or the government agencies trying to capitalize on decades of heavy investment will provide objective assessments of the threat levels involved. However, important clues surface from time to time in public policy debates. For example, Sam Altman, the CEO of OpenAI, has said he considers the threat level of AI to be on a par with both nuclear war and pandemics. As deeply concerning as all three might be in worst-case scenarios when considered separately, how much greater might the threat be when AI is combined with our ever-growing nuclear arsenal?

Because industry, already in bed with government agencies on AI development, can’t be trusted to provide objective assessments, the task of adequately assessing the military and defense-oriented risks of AI is going to fall squarely on the shoulders of the peace movement and of any nonprofit organizations not associated with or funded by industry (although these appear to be few in number). Moreover, after the risks are assessed, realistic policies will need to be formulated, a challenging task given both the technological complexity involved and the lack of transparency among the various players. Nevertheless, it’s critically important that peace organizations address these issues quickly and proactively, applying the requisite wisdom to separate realistic asks from entreaties that smack of wishful or magical thinking and therefore run the risk of being shunted aside or ignored outright.

What, then, might constitute a reasonable and effective policy for the peace movement regarding AI and its military uses? Given how pervasive the AI onslaught already is in politics, education, economics, and culture, it appears unrealistic to suggest that the military abandon AI altogether. Targeting killer robots is certainly important, and partnering with Stop Killer Robots, a European organization dedicated to this work, might be strategically useful. (Somewhat amusingly, if you sign up for their emails, you have to check the familiar box that says “I am not a robot.”) Partnering with other allied movements, including civil liberties, human rights, and arms control groups (such as the Arms Control Association) that already have initiatives underway, may also prove useful.

The harder task will be to design policy positions on the broad AI command-and-control platforms described earlier, which are used to surveil and direct the actions of large-scale autonomous systems and the weapons they control. Much thought and discussion will be needed to sift through this labyrinth of technological complexity. Peace activists with sufficient expertise in the deeper aspects of AI operation and military planning must be enlisted to ensure that policies are technologically sound and feasible in the real world. As a starting point, I suggest that the most obvious and important policy option is to demand that AI systems be decoupled from the nuclear war apparatus. This will not be an easy process. The clock is ticking, and we’re all playing catch-up in this complicated and often confusing progression of events.

_______________________________________________

Tom Valovic is a journalist and the author of Digital Mythologies (Rutgers University Press). He has served as a consultant to the former Congressional Office of Technology Assessment and was editor-in-chief of Telecommunications magazine, where he was the first journalist to report on the advent of the public Internet. Tom is a longtime member of MAPA and currently a member of several working groups.

Massachusetts Peace Action began in the 1980s as Massachusetts Freeze, an affiliate of the nationwide Nuclear Weapons Freeze Campaign. In the late 1980s, Mass Freeze joined the national Freeze Campaign in its merger with national SANE: The Committee for a Sane Nuclear Policy. In 1993, we became Massachusetts Peace Action. Since then, we have continued our disarmament work while broadening our focus to include advocating for non-military solutions to international issues, educating voters on peace issues, and more.

Go to Original – masspeaceaction.org

