Just War Theory as an Inspiration for “Just AI Theory”?

TRANSCEND MEMBERS, 8 Jan 2024

Anthony Judge | Laetus in Praesens - TRANSCEND Media Service

In Quest of a Robust Ethical Framework for AI Aided by ChatGPT

Introduction

8 Jan 2024 – There is considerable concern regarding the dangers associated with the development and use of artificial intelligence (Joshua Rothman, Why the Godfather of A.I. Fears What He’s Built, The New Yorker, 13 November 2023; V. N. Alexander, 2023: The Year of the ChatGPT Scare, Off-Guardian, 29 December 2023). It could readily be concluded that 95% of the media response to AI has been fear-mongering, especially by those with little appreciation of its potential. As described by Alexander with regard to the founders of the Center for Humane Technology:

Although they aren’t worried that AI is conscious or alive, they do worry that AI will be used to make people fight online, to spread disinformation and propaganda, to help bad people make bioweapons or chemical weapons, or to disseminate unreliable information thereby destroying trust in our institutions. Harris and Raskin don’t seem to have noticed that virtually all world governments, their side-kick NGOs, and Big Industry are already doing all of the above, all of the time.

These concerns have resulted in the articulation by the President of the United States of an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (The White House, 30 October 2023). Its opening section devotes a single sentence to the recognition that:

Responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure.

The remainder of that section, and of the document as a whole, continues with the preoccupation:

At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security. Harnessing AI for good and realizing its myriad benefits requires mitigating its substantial risks. This endeavor demands a society-wide effort that includes government, the private sector, academia, and civil society.

The United Nations, through the International Telecommunication Union (as its Specialized Agency), organized an AI for Good Global Summit in partnership with 40 UN sister agencies in 2023. The event appears to have made little use of AI in enhancing the dynamics of summitry — if only as a prelude to the organization of the later COP28 United Nations Climate Change Conference, now widely held to have been fruitless. This raises the question of how the UN’s planned Summit of the Future (2024) will be organized to transcend the long-evident inadequacies of international summitry.

The very extensive Executive Order (the longest in history) includes the following sections, all of which are defensive in tone, if not exclusively so:

  • Ensuring the Safety and Security of AI Technology
  • Promoting Innovation and Competition
  • Supporting Workers
  • Advancing Equity and Civil Rights
  • Protecting Consumers, Patients, Passengers, and Students
  • Protecting Privacy
  • Advancing Federal Government Use of AI
  • Strengthening American Leadership Abroad

There is little trace of how AI might be of value in responding to the global challenges of governance in a time of global crisis. It is ironic that this major initiative, resulting in the establishment of a White House Artificial Intelligence Council, occurs in a period in which the self-acclaimed leader of the free world is increasingly held to be complicit in genocide (US President Biden sued for ‘complicity’ in Israel’s ‘genocide’ in Gaza, Al Jazeera, 14 November 2023; Emily Prey and Azeem Ibrahim, The United States Must Reckon With Its Own Genocides, Foreign Policy, 11 October 2021). Such complicity is recognized as extending to its major allies, especially those with a problematic colonial history (Marc Parry, Uncovering the brutal truth about the British empire, The Guardian, 18 August 2016; More evidence of ‘genocidal killings’ of Aboriginal people in frontier times, Australian Broadcasting Corporation, 16 March 2022).

The remarkable capacity of AI with respect to strategic thinking has been extensively documented in relation to its innovative ability in strategic games, most notably chess and Go (John Menick, Move 37: Artificial Intelligence, Randomness, and Creativity, Mousse Magazine, 55 + 53, 2016). There is very little commentary on how this might be adapted to the resolution of global crises and territorial conflicts, if only in terms of insightful simulation (Simulating the Israel-Palestine Conflict as a Strategy Game, 2023). There is seemingly a cultivated indifference to the possibility that AI might be used to engender an unforeseen solution to the intractable conflicts of Russia-Ukraine, Israel-Palestine, China-Taiwan, or the Koreas. Rather, the focus is on how either side might use AI to achieve total advantage over the other — and on how this might be prevented.

More generally, there is little reference to the manner in which the quality of problem-solving and decision-making might be enhanced (Artificial Intelligence as an Aid to Thinking Otherwise — but to what End? 2023; Yash Sharma, Enhancing Critical Thinking with AI: the power of framed questioning, 29 May 2023). How indeed might AI be used as a “cognitive exoskeleton” for more fruitful ends than those envisaged by the security services? An early vision in this respect is recognized as having been framed by Douglas Engelbart (Toward augmenting the human intellect and boosting our collective IQ, Communications of the ACM, 38, 1995, 8). By contrast, the focus has been rather on the application of AI in the extension of the problematic strategies of security agencies, as exemplified by the case of Palantir according to Binoy Kampmark (Amoral Compass: “Create and Govern Artificial Intelligence”, Global Research, 30 December 2023; AI giant Palantir on a quest to help the West, Green Left, 31 December 2023, 1397).

The focus here on values follows from an earlier Human Values Project as part of the online Encyclopedia of World Problems and Human Potential. Through the organization of values into 230 value polarities, that project addressed the difficulty of handling the labelling ambiguity of its sets of 987 constructive values and 1,992 destructive values. The conventional labelling of virtues and sins offers a particular example of this — especially in a multicultural global context. This understanding of axiological polarity contrasts with that of Aristotle’s Nicomachean Ethics, in which each of 12 virtues is located at the (golden) mean between its excess and its deficiency (courage, for example, as the mean between rashness and cowardice). The preoccupation was later developed as Values, Virtues and Sins of a Viable Democratic Civilization (2022).
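The contrast between these two models lends itself to a simple formalization. The following is a minimal sketch in Python, in which the value labels are hypothetical placeholders rather than entries from the Human Values Project dataset; it merely illustrates the structural difference between a two-pole polarity and an Aristotelian triad.

```python
# Illustrative sketch only: the labels below are hypothetical examples,
# not entries from the Human Values Project's 230 polarities.

# Polarity model: each value construct pairs a constructive pole
# with a destructive pole.
value_polarities = {
    "honesty": "deception",
    "courage": "cowardice",
    "generosity": "meanness",
}

# Aristotelian model (Nicomachean Ethics): each virtue is the mean
# between a vice of excess and a vice of deficiency.
aristotelian_triads = {
    "courage": ("rashness", "cowardice"),
    "generosity": ("prodigality", "stinginess"),
    "good temper": ("irascibility", "spiritlessness"),
}

for virtue, (excess, deficiency) in aristotelian_triads.items():
    print(f"{virtue}: mean between {excess} (excess) and {deficiency} (deficiency)")
```

The structural point is that the polarity model is dyadic whereas the Aristotelian model is triadic, a difference with implications for how any set of values might be configured geometrically.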

The focus on AI ethics in what follows is used to clarify the more general question as to what makes for a robust set of values in contrast with a set which is fragile or ineffectual — and possibly dangerously so (especially when claims are made to the contrary). Of particular interest are differences in the psychosocial implications for memorable engagement with principles, values and virtues, and goals — as constructs — in contrast with the policy directives and behaviours through which they may be understood in practice, as partly explored separately (Being Spoken to Meaningfully by Constructs, 2023).

With regard to AI ethics, the following exercise explores the insights to be derived from extensive engagement with ChatGPT (Version 4.0) as an “interested party” — following the experimental method previously adopted (Artificial Intelligence as an Aid to Thinking Otherwise — but to what End? 2023). In addition to the reservations noted then, the theme explored appeared to evoke responses of a somewhat different style: seemingly more constrained and succinct, and less proactive. It seems to be characterized, as might be expected, by a reversion to the default forms of “management speak” typical of the international institutional response to issues with ethical implications. Given the continuing development of ChatGPT, it is possible that this resulted from “tweaking” of particular algorithms in the intervening period — a reminder of how these may be crafted behind the scenes to particular ends, notably those of “Big Brother” (and despite any requirements for “transparency”).

Given the questionable development of “international ethics”, ChatGPT proved surprisingly useful in an experimental “consolidation” of a number of international human rights charters to enable the polyhedral configuration of “rights” as a potential exemplification of systemic integrity. In the absence of the ethical framework envisaged by the Parliament of the World’s Religions as a “global ethic”, the approach was used in a preliminary exploration of its relevance to a robust configuration of the ethics of AI development.
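By way of a purely illustrative sketch of what such a polyhedral configuration might involve (not the actual consolidation produced with ChatGPT), the following assigns six hypothetical right labels to the faces of a cube, so that face adjacency can stand for a systemic relationship between rights.

```python
# Hypothetical sketch of a "polyhedral configuration" of rights:
# six placeholder rights mapped to the faces of a cube. On a cube,
# each face is adjacent to every face except its opposite, so each
# right "borders" four others. The labels are illustrative, not the
# consolidated set derived from the human rights charters.
faces = ["life", "liberty", "expression", "privacy", "assembly", "due process"]

# Opposite-face pairs of a cube, by index: (0, 5), (1, 4), (2, 3).
opposite = {i: 5 - i for i in range(6)}

adjacency = {
    i: [j for j in range(6) if j not in (i, opposite[i])]
    for i in range(6)
}

for i, right in enumerate(faces):
    neighbours = ", ".join(faces[j] for j in adjacency[i])
    print(f"{right} borders: {neighbours}")
```

A richer polyhedron, such as the icosahedron with its 30 edges, would allow a larger consolidated set to be configured in the same way while preserving a comparable degree of systemic symmetry.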

In this light it may then be asked whether Just War Theory constitutes a robust ethical framework — given global dependence on the guidelines it offers in practice. The argument concludes with a discussion of the cognitive challenge of engagement with a meaningful construct like a principle, value, or goal.

Just AI theory and Just suffering theory?

TO CONTINUE READING Go to Original – laetusinpraesens.org


