Artificial Emotional Intelligence and Its Human Implications

TRANSCEND MEMBERS, 8 May 2023

Anthony Judge | Laetus in Praesens - TRANSCEND Media Service

Dumbing Down or Eliciting a Higher Order of Authenticity and Subtlety in Dialogue

Introduction

8 May 2023 – There is currently no lack of references to the major future impacts of artificial intelligence on global civilization at every level. Some of these are anticipated with concern, especially with warnings of how AI is likely to be misused to undermine valued social processes and employment, whether deliberately or inadvertently (AI ‘godfather’ Geoffrey Hinton warns of dangers as he quits Google, BBC, 3 May 2023; Will Douglas Heaven, Geoffrey Hinton tells us why he’s now scared of the tech he helped build, MIT Technology Review, 2 May 2023).

Arguably there is now a state of official panic at the foreseeable impacts of AI (White House: Big Tech bosses told to protect public from AI risks, BBC, 5 May 2023). Many now recognize the possibility that they may soon be outsmarted by AI. Key figures in artificial intelligence want training of powerful AI systems to be suspended amid fears of a threat to humanity (Pause Giant AI Experiments: An Open Letter, Future of Life Institute, 22 March 2023).

Seemingly at issue is whether society systematically cultivates intellectual mediocrity in order to avoid engaging with higher orders of intelligence and modes of discourse. Ironically this is a scenario consistent with one explanation of the Fermi paradox. Given the challenges of global governance, such a choice could be usefully explored in the light of the arguments of Jared Diamond (Collapse: How Societies Choose to Fail or Succeed, 2005). The arguments of Thomas Homer-Dixon regarding the final constraints on the Roman Empire from energy resources are also of relevance — by substituting collective intelligence for energy (The Upside of Down: catastrophe, creativity, and the renewal of civilization, 2006).

The possibilities have long invited speculation in science fiction as characteristic of dystopia, rather than the utopia on which techno-optimists are uncritically focused (George Orwell, Nineteen Eighty-Four, 1949). It can also be speculated that AI is already in use to curate the mainstream discourse through which global strategy is increasingly framed (Governance of Pandemic Response by Artificial Intelligence, 2021). That argument explored the extent to which human agents might have been unconsciously controlled through the AI-elaboration of communication scripts.

The main emphasis with respect to AI is of course in relation to conventional understandings of intelligence, dramatically highlighted by the capacity to defeat humans in games that have epitomised that intelligence, namely chess and go. The most recent developments focus on the use of large language models through which AI learning is enabled. These have now reached a remarkable stage through widespread access to applications like ChatGPT (developed by OpenAI) to which an extremely wide variety of questions may be addressed for a variety of purposes. Some are already deprecated to the extent of engendering restrictive measures (Rapid growth of ‘news’ sites using AI tools like ChatGPT is driving the spread of misinformation, Euronews, 2 May 2023).

In contrast with conventional understandings of intelligence, attention has focused to a less evident degree on emotional intelligence (Daniel Goleman, Emotional Intelligence, 1995). Whereas it is common to rate individuals in terms of their IQ, it is relatively rare to encounter references to individuals with high emotional intelligence (EQ). Indeed there is little understanding of what this might mean in practice, although the capacity of some individuals to skillfully manipulate their relations with others is acknowledged — whether to mutual benefit or in support of some other agenda. The ability of some to “sell” an idea or product — through unusual persuasive skills — is readily recognized. These skills are seemingly unrelated to AI.

The question of how and when AI (as conventionally understood) might develop skills of artificial emotional intelligence (AEI) is now actively researched. AEI is considered a “subset” of AI. Concerns about the development of AI tend to refer to AEI only indirectly by allusion — if at all. The concern in what follows is to highlight some of the issues which are seemingly neglected with respect to AEI. In contrast to the challenge to humans of AI — and the point at which AI might significantly exceed human capacities — the challenge to human emotional capacities can be understood otherwise.

The issues relating to AEI are fundamental to the currently envisaged development of information warfare as psychological warfare — into memetic warfare and cognitive warfare, notably in support of noopolitics (John Arquilla and David Ronfeldt, The Emergence of Noopolitik: toward an American information strategy, RAND Corporation, 1999). This is especially the case with the diminishing significance of facts in relation to assertive declarations by authorities through the media — namely the development of a “facit reality” enabled by higher orders of persuasion.

A distinctive approach to the artificiality of emotional intelligence noted here is the manner in which many training courses and programs for humans are focused on some form of behaviour modification held to be of value in engaging with others — however “false” the result may be sensed to be. These range from hospitality programs through to finishing schools and the formalities of etiquette. They may be framed as personal development, even in relation to a spiritual agenda — possibly to facilitate the proselytizing of a missionary agenda. The approach may be recognized and deprecated as brainwashing — as in cults and in the experimentation on prisoners in Guantanamo Bay. The techniques of persuasion are most notably evident in the training of sales personnel. They may well be cultivated as a feature of “grooming” in its most deprecated sense.

The question here is to what extent AEI development will be informed by the traditions and practices of such programs. From another perspective it may also be asked to what extent these pre-AI programs constitute the cultivation of artificial emotional intelligence in their own right. Will the skilled emotionally sensitive responses of an AEI become recognized as superior to those of a human being — or indistinguishable from those of a human being — or inherently “false”? The possibility of such distinction in the case of intelligence is framed by the Turing test, raising the question of how the authenticity of interaction of an AEI will be rated in relation to that of a human being (Manh-Tung Ho, What is a Turing test for emotional AI? AI and Society, 2022; Arthur C. Schwaninger, The Philosophising Machine: a specification of the Turing test, Philosophia, 50, 2022). The question is readily evident with respect to the authenticity of responses of personnel in the hospitality industry. The issue will be particularly evident in the case of those on the autism spectrum — commonly characterised by Asperger syndrome — in which emotional sensitivity is constrained or absent.

A more provocative development of AEI applications, informed by the sacred scriptures of religions, will be appreciation of their discourse in contrast to that of religious leaders and priests. With the capacity to draw on far more extensive religious resources, and the ability to adjust tone-of-voice to persuasive ends, will the discourse of AEI applications become preferable for many to that of traditional religious leaders? This possibility is all the greater in that individuals will be able to engage with greater confidence with AEI applications in posing questions with personal existential implications — in notable contrast to the capacities of the confessional, for example.

The treatment of AEI as a “subset” of the conventional intelligence developed by AI usefully raises the question of whether — in addition to “spiritual intelligence” — other forms of intelligence are effectively neglected by AI research (Steven Benedict, Spiritual Intelligence, 2000). The theory of multiple intelligences identifies eight (Howard Gardner, Frames of Mind: the Theory of Multiple Intelligences, 1983).

These possibilities highlight the challenge with respect to political discourse and the declarations of political leaders. In such cases the persuasive implications of tone-of-voice have long been evident. The question is then the point at which discourse enabled by AEI is held to be more credible and authentic than that of politicians (Varieties of Tone of Voice and Engagement with Global Strategy, 2020). More challenging is that AEI applications will be able to adjust their persuasive skills in the light of the reactions of an audience, especially in the case of individuals — potentially alternating between a requisite variety of voices to engender coherence. Expressed otherwise, at what point will a keynote speech enabled by AEI be indistinguishable from that of a human authority — or eminently preferable to what are often recognized as repetitive ramblings?

In developing these arguments at this time, the following includes the answers to relevant questions asked of ChatGPT — perhaps to an excessive extent, but offering an experimental taste of what may be a future pattern of insight presentation. These already offer an indication of the courtesy and appropriateness of the responses — as will be the case with AEI — as well as highlighting issues which that application may be deemed to have inadequately addressed through its programmed techniques of curation.

Remarkable Recent Increase in Access to AI

TO CONTINUE READING Go to Original – laetusinpraesens.org

