How Artificial Is Human Intelligence — and Humanity?

TRANSCEND MEMBERS, 6 Nov 2023

Anthony Judge | Laetus in Praesens - TRANSCEND Media Service

Consideration of AI Safety versus Safety from Human Artifice

Introduction

6 Nov 2023 – Much is now made of the potential dangers of artificial intelligence and the urgent need for its regulation (Yoshua Bengio, et al., “Large-scale Risks from Upcoming, Powerful AI Systems”: Managing AI Risks in an Era of Rapid Progress, Global Research, 25 October 2023). Media have focused on the recent assertion by Henry Kissinger, made in the light of his involvement in the World Economic Forum (AI Will Replace Humans within 5 Years, Slay, 28 October 2023).

Such are purported to be the dangerous consequences for the world that the UK government has urgently hosted an AI Safety Summit of leaders in the field at Bletchley Park, the iconic location of computer innovation in response to the threat of World War II (BBC, 28 October 2023). The Summit gave rise to the so-called Bletchley Declaration, reinforced by a statement by the UN Secretary-General (Statement at the UK AI Safety Summit, United Nations, 2 November 2023). It is unclear whether any form of AI was used there to enhance the quality of discourse typical of such events (Use of ChatGPT to Clarify Possibility of Dialogue of Higher Quality, 2023).

Failure to enhance the quality of interaction at such events raises the question as to whether they could be appropriately caricatured as “large language models” of an outmoded kind — and dangerous as such — with their own variant of the “hallucination” deprecated as a characteristic of AIs.

At the same time, the world is confronted by the unrestrained response by Israel to the attack from Gaza, and the many human casualties which are expected to result. However it is perceived, this is clearly the consequence of the application of human intelligence. In the case of the Israelis, their relative intelligence is widely recognized, if not a matter of ethnic pride (Nelly Lalany, Ashkenazi Jews rank smartest in world: studies show descendants of Jews from Medieval Germany, throughout Europe have IQ 20% higher than global average, Ynet News, 23 July 2011; Bret Stephens, The Secrets of Jewish Genius, The New York Times, 27 December 2019; Sander L. Gilman, Are Jews Smarter Than Everyone Else? Mens Sana Monographs, 6, 2008, 1; Kiryn Haslinger, A Jewish Gene for Intelligence? Scientific American, 1 October 2005).

The concern in what follows is how to distinguish, if that is possible, between the much publicized dangers of AI and those deriving from “human artifice”. The nature of human artifice, and its dangerous consequences, has become confused by the focus on artificial intelligence. It is however clear that many global crises are the consequences of human artifice — in the absence of any use of AI.

The Anthropocene Era might well be explored in such terms. AI as a safety concern is a latecomer to the scene — itself a consequence of human artifice. The currently acclaimed urgency of the crisis posed by AI can even be seen as displacing the urgency previously accorded to climate change. The apparent shift of strategic focus to the dangers of AI could even be seen as a convenience — in the absence of viable responses to climate change.

Given the current multiplicity of global crises — a polycrisis — the seemingly absurd consequences of human artifice merit particular concern by comparison with those potentially to be engendered by artificial intelligence. The nature of intelligence has however long been a matter of debate, especially in the light of the search for extraterrestrial intelligence. Since humanity has engendered so many crises, it might even be provocatively asked whether “human intelligence”, as acclaimed, will be appreciated as such by the future (Quest for Intelligent Life on Earth — from a Future Perspective, 2023). However it might also be asked — speculatively — whether humanity is endowed with some form of “indwelling” intelligence, relevant to crisis response, elusive though it may be (Implication of Indwelling Intelligence in Global Confidence-building, 2012).

The particular concern in what follows is whether what is appreciated as “human intelligence” has progressively acquired characteristics which might be deprecated as “artificial”. How artificial has “humanity” become? What indeed is the distinction between “artificial” and “artifice”? How artificial is one’s intelligence as a human being?

How is the distinction to be made between the “artificiality” of an agent of an organization representing humanity (or an expert representing a domain of expertise) and the “humanity” of that agent or discipline? Most obviously the question applies to those gathered at the AI Safety Summit or to those charged with regulating AI in the future.

Some such progressive artificiality is to be expected as a form of human adaptation to an increasingly artificial environment and the skills required to survive within it. The adaptation to AI might well be understood in terms of the acquisition of features which are characteristic of AI — and of its dangers to humanity. Dangers held to be associated with the technology of “artificial intelligence” then merit exploration as deriving from the unacknowledged projection of the artificiality increasingly associated with human intelligence.

This projection could be seen as an instance of misplaced concreteness — the fallacy of reification. The dangers perceived in the technology are then to be understood as driven — to some degree — by questionable patterns of thinking. Ironically this understanding might be supported by insights into cognitive development from disciplines upholding perspectives which are the antithesis of those driving the development of AI technology. It is in this sense that the surprisingly extensive literature on AI from a variety of religious perspectives merits attention — especially in the light of the challenge to ethics and morality now highlighted with respect to AI development.

The following exploration includes presentation of the challenge to ChatGPT, itself exemplifying an “interested party”. That exchange helps to clarify the distinction which would seem to be of value at this time.

TO CONTINUE READING Go to Original – laetusinpraesens.org


