{"id":235095,"date":"2023-05-15T12:00:38","date_gmt":"2023-05-15T11:00:38","guid":{"rendered":"https:\/\/www.transcend.org\/tms\/?p=235095"},"modified":"2023-06-20T05:49:39","modified_gmt":"2023-06-20T04:49:39","slug":"ai-machines-arent-hallucinating-but-their-makers-are","status":"publish","type":"post","link":"https:\/\/www.transcend.org\/tms\/2023\/05\/ai-machines-arent-hallucinating-but-their-makers-are\/","title":{"rendered":"AI Machines Aren\u2019t \u2018Hallucinating\u2019. But Their Makers Are"},"content":{"rendered":"<div class=\"dcr-1yi1cnj\" data-gu-name=\"standfirst\">\n<div class=\" dcr-ysrxk6\">\n<blockquote><p><em>Tech CEOs want us to believe that generative AI will benefit humanity. They are kidding themselves.<\/em><\/p><\/blockquote>\n<\/div>\n<\/div>\n<div id=\"attachment_235097\" style=\"width: 410px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/05\/ai-robot-big-tech.webp\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-235097\" class=\"wp-image-235097\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/05\/ai-robot-big-tech.webp\" alt=\"\" width=\"400\" height=\"240\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/05\/ai-robot-big-tech.webp 620w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/05\/ai-robot-big-tech-300x180.webp 300w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><\/a><p id=\"caption-attachment-235097\" class=\"wp-caption-text\">\u2018And their goal never was to solve climate change or make our governments more responsible or our daily lives more leisurely.\u2019<br \/>Illustration: LiliGraphie\/Alamy<\/p><\/div>\n<p class=\"dcr-94xsh\"><em>8 May 2023 &#8211; <\/em>Inside the many debates swirling around the rapid rollout of so-called artificial intelligence, there is a relatively obscure skirmish focused on the choice of the word \u201challucinate\u201d.<\/p>\n<div 
id=\"maincontent\" class=\"dcr-1tbm7dz\">\n<div class=\"article-body-commercial-selector article-body-viewer-selector dcr-18i1c38\">\n<p class=\"dcr-94xsh\">This is the term that architects and boosters of generative AI have settled on to characterize responses served up by chatbots that are wholly manufactured, or flat-out wrong. Like, for instance, when you ask a bot for a definition of something that doesn\u2019t exist and it, rather convincingly, gives you <a target=\"_blank\" href=\"https:\/\/www.wsj.com\/articles\/hallucination-when-chatbots-and-people-see-what-isnt-there-91c6c88b\"  data-link-name=\"in body link\">one<\/a>, complete with made-up footnotes. \u201cNo one in the field has yet solved the hallucination problems,\u201d Sundar Pichai, the CEO of Google and Alphabet, <a target=\"_blank\" href=\"https:\/\/www.cbs.com\/shows\/video\/SR6ZcCYjoD3O0sn_ZmVUw87daawsZ5V3\/\"  data-link-name=\"in body link\">told<\/a> an interviewer recently.<\/p>\n<p class=\"dcr-94xsh\">That\u2019s true \u2013 but why call the errors \u201challucinations\u201d at all? Why not algorithmic junk? Or glitches? Well, hallucination refers to the mysterious capacity of the human brain to perceive phenomena that are not present, at least not in conventional, materialist terms. By appropriating a word commonly used in psychology, psychedelics and various forms of mysticism, AI\u2019s boosters, while acknowledging the fallibility of their machines, are simultaneously feeding the sector\u2019s most cherished mythology: that by building these large language models, and training them on everything that we humans have written, said and represented visually, they are in the process of birthing an animate intelligence on the cusp of sparking an evolutionary leap for our species. 
How else could bots like Bing and Bard be tripping out there in the ether?<\/p>\n<p class=\"dcr-94xsh\">Warped hallucinations are indeed afoot in the world of AI, however \u2013 but it\u2019s not the bots that are having them; it\u2019s the tech CEOs who unleashed them, along with a phalanx of their fans, who are in the grips of wild hallucinations, both individually and collectively. Here I am defining hallucination not in the mystical or psychedelic sense, mind-altered states that can indeed assist in accessing profound, previously unperceived truths. No. These folks are just tripping: seeing, or at least claiming to see, evidence that is not there at all, even conjuring entire worlds that will put their products to use for our universal elevation and education.<\/p>\n<p class=\"dcr-94xsh\">Generative AI will end poverty, they tell us. It will cure all disease. It will solve climate change. It will make our jobs more meaningful and exciting. It will unleash lives of leisure and contemplation, helping us reclaim the humanity we have lost to late capitalist mechanization. It will end loneliness. It will make our governments rational and responsive. These, I fear, are the real AI hallucinations and we have all been hearing them on a loop ever since ChatGPT launched at the end of last year.<\/p>\n<p class=\"dcr-94xsh\">There is a world in which generative AI, as a powerful predictive research tool and a performer of tedious tasks, could indeed be marshalled to <a target=\"_blank\" href=\"https:\/\/www.nature.com\/articles\/d41586-020-03348-4\"  data-link-name=\"in body link\">benefit<\/a> humanity, other species and our shared home. 
But for that to happen, these technologies would need to be deployed inside a vastly different economic and social order than our own, one that had as its purpose the meeting of human needs and the protection of the planetary systems that support all life.<\/p>\n<p class=\"dcr-94xsh\">And as those of us who are not currently tripping well understand, our current system is nothing like that. Rather, it is built to maximize the extraction of wealth and profit \u2013 from both humans and the natural world \u2013 a reality that has brought us to what we might think of as capitalism\u2019s techno-necro stage. In that reality of hyper-concentrated power and wealth, AI \u2013 far from living up to all those utopian hallucinations \u2013 is much more likely to become a fearsome tool of further dispossession and despoliation.<\/p>\n<p class=\"dcr-94xsh\">I\u2019ll dig into why that is so. But first, it\u2019s helpful to think about the <em>purpose<\/em> the utopian hallucinations about AI are serving. What work are these benevolent stories doing in the culture as we encounter these strange new tools? Here is one hypothesis: they are the powerful and enticing cover stories for what may turn out to be the largest and most consequential theft in human history. Because what we are witnessing is the wealthiest companies in history (Microsoft, Apple, Google, Meta, Amazon \u2026) unilaterally seizing the sum total of human knowledge that exists in digital, scrapable form and walling it off inside proprietary products, many of which will take direct aim at the humans whose lifetime of labor trained the machines without giving permission or consent.<\/p>\n<p class=\"dcr-94xsh\">This should not be legal. 
In the case of copyrighted material that we now <a target=\"_blank\" href=\"https:\/\/www.washingtonpost.com\/technology\/interactive\/2023\/ai-chatbot-learning\/\"  data-link-name=\"in body link\">know<\/a> trained the models (including this newspaper), various <a target=\"_blank\" href=\"https:\/\/news.artnet.com\/art-world\/class-action-lawsuit-ai-generators-deviantart-midjourney-stable-diffusion-2246770\"  data-link-name=\"in body link\">lawsuits<\/a> have been filed that will argue this was clearly illegal. Why, for instance, should a for-profit company be permitted to feed the paintings, drawings and photographs of living artists into a program like Stable Diffusion or Dall-E 2 so it can then be used to generate doppelganger versions of those very artists\u2019 work, with the benefits flowing to everyone but the artists themselves?<\/p>\n<p class=\"dcr-94xsh\">The painter and illustrator Molly Crabapple is helping lead a movement of artists challenging this theft. \u201cAI art generators are trained on enormous datasets, containing millions upon millions of copyrighted images, harvested without their creator\u2019s knowledge, let alone compensation or consent. This is effectively the greatest art heist in history. Perpetrated by respectable-seeming corporate entities backed by Silicon Valley venture capital. It\u2019s daylight robbery,\u201d a new <a target=\"_blank\" href=\"https:\/\/artisticinquiry.org\/AI-Open-Letter\"  data-link-name=\"in body link\">open<\/a> letter she co-drafted states.<\/p>\n<p class=\"dcr-94xsh\">The trick, of course, is that Silicon Valley routinely calls theft \u201cdisruption\u201d \u2013 and too often gets away with it. We know this move: charge ahead into lawless territory; claim the old rules don\u2019t apply to your new tech; scream that regulation will only help China \u2013 all while you get your facts solidly on the ground. 
By the time we all get over the novelty of these new toys and start taking stock of the social, political and economic wreckage, the tech is already so ubiquitous that the <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/article\/us-google-books-idUSKCN0SA1S020151016\"  data-link-name=\"in body link\">courts<\/a> and policymakers throw up their hands.<\/p>\n<p class=\"dcr-94xsh\">We saw it with Google\u2019s book and art scanning. With Musk\u2019s space colonization. With Uber\u2019s assault on the taxi industry. With Airbnb\u2019s attack on the rental market. With Facebook\u2019s promiscuity with our data. Don\u2019t ask for permission, the disruptors like to say, ask for forgiveness. (And lubricate the asks with generous campaign contributions.)<\/p>\n<p class=\"dcr-94xsh\">In The Age of Surveillance Capitalism, <a target=\"_blank\" href=\"https:\/\/www.theguardian.com\/technology\/2019\/jan\/20\/shoshana-zuboff-age-of-surveillance-capitalism-google-facebook\"  data-link-name=\"in body link\">Shoshana Zuboff<\/a> meticulously details how Google\u2019s Street View maps steamrolled over privacy norms by sending its camera-bedecked cars out to photograph our public roadways and the exteriors of our homes. By the time the lawsuits defending privacy rights rolled around, Street View was already so ubiquitous on our devices (and so cool, and so convenient \u2026) that few courts outside <a target=\"_blank\" href=\"https:\/\/archive.nytimes.com\/bits.blogs.nytimes.com\/2013\/04\/23\/germanys-complicated-relationship-with-google-street-view\/\"  data-link-name=\"in body link\">Germany<\/a> were willing to intervene.<\/p>\n<p class=\"dcr-94xsh\">Now the same thing that happened to the exterior of our homes is happening to our words, our images, our songs, our entire digital lives. All are currently being seized and used to train the machines to simulate thinking and creativity. 
These companies must know they are engaged in theft, or at least that a <a target=\"_blank\" href=\"https:\/\/hbr.org\/2023\/04\/generative-ai-has-an-intellectual-property-problem\"  data-link-name=\"in body link\">strong case<\/a> can be made that they are. They are just hoping that the old playbook works one more time \u2013 that the scale of the heist is already so large and unfolding with such <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/technology\/chatgpt-sets-record-fastest-growing-user-base-analyst-note-2023-02-01\/\"  data-link-name=\"in body link\">speed<\/a> that courts and policymakers will once again throw up their hands in the face of the supposed inevitability of it all.<\/p>\n<p class=\"dcr-94xsh\">It\u2019s also why their hallucinations about all the wonderful things that AI will do for humanity are so important. Because those lofty claims disguise this mass theft as a gift \u2013 at the same time as they help rationalize AI\u2019s undeniable perils.<\/p>\n<p class=\"dcr-94xsh\">By now, most of us have heard about the <a target=\"_blank\" href=\"https:\/\/aiimpacts.org\/2022-expert-survey-on-progress-in-ai\/#Extinction_from_AI\"  data-link-name=\"in body link\">survey<\/a> that asked AI researchers and developers to estimate the probability that advanced AI systems will cause \u201chuman extinction or similarly permanent and severe disempowerment of the human species\u201d. Chillingly, the median response was that there was a 10% chance.<\/p>\n<p class=\"dcr-94xsh\">How does one rationalize going to work and pushing out tools that carry such existential risks? Often, the reason given is that these systems also carry huge potential upsides \u2013 except that these upsides are, for the most part, hallucinatory. 
Let\u2019s dig into a few of the wilder ones.<\/p>\n<h2 id=\"hallucination-1-ai-will-solve-the-climate-crisis\"><strong>Hallucination #1: AI <\/strong><strong>will <\/strong><strong>solve <\/strong><strong>the climate <\/strong><strong>crisis<\/strong><\/h2>\n<p class=\"dcr-94xsh\">Almost invariably topping the lists of AI upsides is the claim that these systems will somehow solve the climate crisis. We have heard this from everyone from the <a target=\"_blank\" href=\"https:\/\/www.weforum.org\/agenda\/2021\/08\/how-ai-can-fight-climate-change\/\"  data-link-name=\"in body link\">World Economic Forum<\/a> to the <a target=\"_blank\" href=\"https:\/\/world101.cfr.org\/global-era-issues\/climate-change\/how-can-artificial-intelligence-combat-climate-change\"  data-link-name=\"in body link\">Council on Foreign Relations<\/a> to <a target=\"_blank\" href=\"https:\/\/www.bcg.com\/publications\/2022\/how-ai-can-help-climate-change\"  data-link-name=\"in body link\">Boston Consulting Group<\/a>, which explains that AI \u201ccan be used to support all stakeholders in taking a more informed and data-driven approach to combating carbon emissions and building a greener society. It can also be employed to reweight global climate efforts toward the most at-risk regions.\u201d The former Google CEO Eric Schmidt summed up the case when he <a target=\"_blank\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2023\/03\/open-ai-gpt4-chatbot-technology-power\/673421\/\"  data-link-name=\"in body link\">told<\/a> the Atlantic that AI\u2019s risks were worth taking, because \u201cIf you think about the biggest problems in the world, they are all really hard \u2013 climate change, human organizations, and so forth. And so, I always want people to be smarter.\u201d<\/p>\n<p class=\"dcr-94xsh\">According to this logic, the failure to \u201csolve\u201d big problems like climate change is due to a deficit of smarts. 
Never mind that smart people, heavy with PhDs and Nobel prizes, have been telling our governments for decades what needs to happen to get out of this mess: slash our emissions, leave carbon in the ground, tackle the overconsumption of the rich and the underconsumption of the poor because no energy source is free of ecological costs.<\/p>\n<p class=\"dcr-94xsh\">The reason this very smart counsel has been ignored is not due to a reading comprehension problem, or because we somehow need machines to do our thinking for us. It\u2019s because doing what the climate crisis demands of us would strand <a target=\"_blank\" href=\"https:\/\/www.wsj.com\/articles\/trillions-in-assets-may-be-left-stranded-as-companies-address-climate-change-11637416980\"  data-link-name=\"in body link\">trillions of dollars<\/a> of fossil fuel assets, while challenging the consumption-based growth model at the heart of our interconnected economies. The climate crisis is not, in fact, a mystery or a riddle we haven\u2019t yet solved due to insufficiently robust data sets. We know what it would take, but it\u2019s not a quick fix \u2013 it\u2019s a paradigm shift. Waiting for machines to spit out a more palatable and\/or profitable answer is not a cure for this crisis, it\u2019s one more symptom of it.<\/p>\n<p class=\"dcr-94xsh\">Clear away the hallucinations and it looks far more likely that AI will be brought to market in ways that actively deepen the climate crisis. First, the giant servers that make instant essays and artworks from chatbots possible are an enormous and growing <a target=\"_blank\" href=\"https:\/\/penntoday.upenn.edu\/news\/hidden-costs-ai-impending-energy-and-resource-strain\"  data-link-name=\"in body link\">source<\/a> of carbon emissions. 
Second, as companies like Coca-Cola start making <a target=\"_blank\" href=\"https:\/\/www.coca-colacompany.com\/news\/coca-cola-invites-digital-artists-to-create-real-magic-using-new-ai-platform\"  data-link-name=\"in body link\">huge investments<\/a> to use generative AI to sell more products, it\u2019s becoming all too clear that this new tech will be used in the same ways as the last generation of digital tools: that what begins with lofty promises about spreading freedom and democracy ends up micro-targeting ads at us so that we buy more useless, carbon-spewing stuff.<\/p>\n<p class=\"dcr-94xsh\">And there is a third factor, this one a little harder to pin down. The more our media channels are flooded with deep fakes and clones of various kinds, the more we have the feeling of sinking into informational quicksand. Geoffrey Hinton, often referred to as \u201cthe godfather of AI\u201d because the neural net he developed more than a decade ago forms the building blocks of today\u2019s large language models, understands this well. He just quit a senior role at Google so that he could speak freely about the risks of the technology he helped create, including, as he <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2023\/05\/01\/technology\/ai-google-chatbot-engineer-quits-hinton.html\"  data-link-name=\"in body link\">told<\/a> the New York Times, the risk that people will \u201cnot be able to know what is true anymore\u201d.<\/p>\n<p class=\"dcr-94xsh\">This is highly relevant to the claim that AI will help battle the climate crisis. Because when we are mistrustful of everything we read and see in our increasingly uncanny media environment, we become even less equipped to solve pressing collective problems. The crisis of trust predates ChatGPT, of course, but there is no question that a proliferation of deep fakes will be accompanied by an exponential increase in already thriving conspiracy cultures. 
So what difference will it make if AI comes up with technological and scientific breakthroughs? If the fabric of shared reality is unravelling in our hands, we will find ourselves unable to respond with any coherence at all.<\/p>\n<h2 id=\"hallucination-2-ai-will-deliver-wise-governance\"><strong>Hallucination #2: AI <\/strong><strong>will <\/strong><strong>deliver <\/strong><strong>wise <\/strong><strong>governance<\/strong><\/h2>\n<p class=\"dcr-94xsh\">This hallucination summons a near future in which politicians and bureaucrats, drawing on the vast aggregated intelligence of AI systems, are able \u201cto see patterns of need and develop evidence-based programs\u201d that have greater benefits to their constituents. That claim comes from a <a target=\"_blank\" href=\"https:\/\/www.centreforpublicimpact.org\/\"  data-link-name=\"in body link\">paper<\/a> published by the Boston Consulting Group\u2019s foundation, but it is being echoed inside many thinktanks and management consultancies. And it\u2019s telling that these particular companies \u2013 the firms hired by governments and other corporations to identify cost savings, often by firing large numbers of workers \u2013 have been quickest to jump on the AI bandwagon. PwC (formerly PricewaterhouseCoopers) just <a target=\"_blank\" href=\"https:\/\/venturebeat.com\/ai\/the-power-of-infrastructure-purpose-built-for-ai\/\"  data-link-name=\"in body link\">announced<\/a> a $1bn investment, and Bain &amp; Company as well as Deloitte are reportedly enthusiastic about using these tools to make their clients more \u201cefficient\u201d.<\/p>\n<p class=\"dcr-94xsh\">As with the climate claims, it is necessary to ask: is the reason politicians impose cruel and ineffective policies that they suffer from a lack of evidence? An inability to \u201csee patterns,\u201d as the BCG paper suggests? 
Do they not understand the human costs of <a target=\"_blank\" href=\"https:\/\/www.theguardian.com\/society\/2022\/aug\/03\/how-the-tory-party-has-systematically-run-down-the-nhs\"  data-link-name=\"in body link\">starving<\/a> public healthcare amid pandemics, or of failing to invest in non-market housing when tents fill our urban parks, or of approving new fossil fuel infrastructure while temperatures soar? Do they need AI to make them \u201csmarter\u201d, to use Schmidt\u2019s term \u2013 or are they precisely smart enough to know who is going to underwrite their next campaign, or, if they stray, bankroll their rivals?<\/p>\n<p class=\"dcr-94xsh\">It would be awfully nice if AI really could sever the link between corporate money and reckless policy making \u2013 but that link has everything to do with why companies like Google and Microsoft have been allowed to release their chatbots to the public despite the avalanche of warnings and known risks. Schmidt and others have been on a years-long lobbying campaign <a target=\"_blank\" href=\"https:\/\/epic.org\/wp-content\/uploads\/foia\/epic-v-ai-commission\/EPIC-19-09-11-NSCAI-FOIA-20200331-3rd-Production-pt9.pdf\"  data-link-name=\"in body link\">telling<\/a> both parties in Washington that if they aren\u2019t free to barrel ahead with generative AI, unburdened by serious regulation, then western powers will be left in the dust by China. 
Last year, the top tech companies <a target=\"_blank\" href=\"https:\/\/www.bnnbloomberg.ca\/tech-giants-broke-their-spending-records-on-lobbying-last-year-1.1877988\"  data-link-name=\"in body link\">spent<\/a> a record $70m to lobby Washington \u2013 more than the oil and gas sector \u2013 and that sum, Bloomberg News notes, is on top of the millions spent \u201con their wide array of trade groups, non-profits and thinktanks\u201d.<\/p>\n<p class=\"dcr-94xsh\">And yet despite their intimate knowledge of precisely how money shapes policy in our national capitals, when you listen to Sam Altman, the CEO of OpenAI \u2013 maker of ChatGPT \u2013 talk about the best-case scenarios for his products, all of this seems to be forgotten. Instead, he seems to be hallucinating a world entirely unlike our own, one in which politicians and industry make decisions based on the best data and would never put countless lives at risk for profit and geopolitical advantage. Which brings us to another hallucination.<\/p>\n<h2 id=\"hallucination-3-tech-giants-can-be-trusted-not-to-break-the-world\"><strong>Hallucination #3: <\/strong><strong>tech <\/strong><strong>giants <\/strong><strong>can <\/strong><strong>be <\/strong><strong>trusted <\/strong><strong>not to <\/strong><strong>break the <\/strong><strong>world<\/strong><\/h2>\n<p class=\"dcr-94xsh\"><a target=\"_blank\" href=\"https:\/\/steno.ai\/lex-fridman-podcast-10\/367-sam-altman-openai-ceo-on-gpt-4-chatgpt-and\"  data-link-name=\"in body link\">Asked<\/a> if he is worried about the frantic gold rush ChatGPT has already unleashed, Altman said he is, but added sanguinely: \u201cHopefully it will all work out.\u201d Of his fellow tech CEOs \u2013 the ones competing to rush out their rival chatbots \u2013 he said: \u201cI think the better angels are going to win out.\u201d<\/p>\n<p class=\"dcr-94xsh\">Better angels? At Google? 
I\u2019m pretty sure the company <a target=\"_blank\" href=\"https:\/\/www.engadget.com\/google-fires-ai-researcher-over-paper-challenge-132640478.html\"  data-link-name=\"in body link\">fired<\/a> most of those because they were publishing critical papers about AI, or calling the company out on racism and sexual harassment in the workplace. More \u201cbetter angels\u201d have <a target=\"_blank\" href=\"https:\/\/www.engadget.com\/google-engineers-leave-over-timnit-gebru-exit-093645678.html\"  data-link-name=\"in body link\">quit<\/a> in alarm, most recently Hinton. That\u2019s because, contrary to the hallucinations of the people profiting most from AI, Google does not make decisions based on what\u2019s best for the world \u2013 it makes decisions based on what\u2019s best for Alphabet\u2019s shareholders, who do not want to miss the latest bubble, not when Microsoft, Meta and Apple are already all in.<\/p>\n<h2 id=\"hallucination-4-ai-will-liberate-us-from-drudgery\"><strong>Hallucination #<\/strong><strong>4<\/strong><strong>: AI <\/strong><strong>will <\/strong><strong>liberate <\/strong><strong>us <\/strong><strong>from <\/strong><strong>drudgery<\/strong><\/h2>\n<p class=\"dcr-94xsh\">If Silicon Valley\u2019s benevolent hallucinations seem plausible to many, there is a simple reason for that. Generative AI is currently in what we might think of as its faux-socialism stage. This is part of a now familiar Silicon Valley playbook. 
First, create an attractive product (a search engine, a mapping tool, a social network, a video platform, a ride share \u2026); give it away for free or almost free for a few years, with no discernible viable business model (\u201cPlay around with the bots,\u201d they tell us, \u201csee what fun things you can create!\u201d); make lots of lofty claims about how you are only doing it because you want to create a \u201ctown square\u201d or an \u201cinformation commons\u201d or \u201cconnect the people\u201d, all while spreading freedom and democracy (and not being \u201cevil\u201d). Then watch as people get hooked using these free tools and your competitors declare bankruptcy. Once the field is clear, introduce the targeted ads, the constant surveillance, the police and military contracts, the black-box data sales and the escalating subscription fees.<\/p>\n<p class=\"dcr-94xsh\">Many lives and sectors have been decimated by earlier iterations of this playbook, from taxi drivers to rental markets to local newspapers. With the AI revolution, these kinds of losses could look like rounding errors, with teachers, coders, visual artists, journalists, translators, musicians, care workers and so many others facing the prospect of having their incomes replaced by glitchy code.<\/p>\n<p class=\"dcr-94xsh\">Don\u2019t worry, the AI enthusiasts hallucinate \u2013 it will be wonderful. Who likes work anyway? Generative AI won\u2019t be the end of employment, we are told, only \u201c<a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2023\/04\/22\/opinion\/jobs-ai-chatgpt.html\"  data-link-name=\"in body link\">boring work<\/a>\u201d \u2013 with chatbots helpfully doing all the soul-destroying, repetitive tasks and humans merely supervising them. 
Altman, for his part, <a target=\"_blank\" href=\"https:\/\/steno.ai\/lex-fridman-podcast-10\/367-sam-altman-openai-ceo-on-gpt-4-chatgpt-and\"  data-link-name=\"in body link\">sees<\/a> a future where work \u201ccan be a broader concept, not something you have to do to be able to eat, but something you do as a creative expression and a way to find fulfillment and happiness\u201d.<\/p>\n<p class=\"dcr-94xsh\">That\u2019s an exciting vision of a more beautiful, leisurely life, one many leftists share (including Karl Marx\u2019s son-in-law, Paul Lafargue, who wrote a <a target=\"_blank\" href=\"https:\/\/www.marxists.org\/archive\/lafargue\/1883\/lazy\/\"  data-link-name=\"in body link\">manifesto<\/a> titled The Right To Be Lazy). But we leftists also know that if earning money is to no longer be life\u2019s driving imperative, then there must be other ways to meet our creaturely needs for shelter and sustenance. A world without crappy jobs means that rent has to be free, and healthcare has to be free, and every person has to have inalienable economic rights. And then suddenly we aren\u2019t talking about AI at all \u2013 we\u2019re talking about socialism.<\/p>\n<p class=\"dcr-94xsh\">Because we do not live in the Star Trek-inspired rational, humanist world that Altman seems to be hallucinating. We live under capitalism, and under that system, the effect of flooding the market with technologies that can plausibly perform the economic tasks of countless working people is not that those people are suddenly free to become philosophers and artists. 
It means that those people will find themselves staring into the abyss \u2013 with actual artists among the first to fall.<\/p>\n<p class=\"dcr-94xsh\">That is the message of Crabapple\u2019s open letter, which calls on \u201cartists, publishers, journalists, editors and journalism union leaders to take a pledge for human values against the use of generative-AI images\u201d and \u201ccommit to supporting editorial art made by people, not server farms\u201d. The letter, now <a target=\"_blank\" href=\"https:\/\/artisticinquiry.org\/AI-Open-Letter\"  data-link-name=\"in body link\">signed<\/a> by hundreds of artists, journalists and others, states that all but the most elite artists find their work \u201cat risk of extinction\u201d. And according to Hinton, the \u201cgodfather of AI\u201d, there is no reason to believe that the threat won\u2019t spread. The chatbots take \u201caway the drudge work\u201d but \u201cit might take away more than that\u201d.<\/p>\n<p class=\"dcr-94xsh\">Crabapple and her co-authors write: \u201cGenerative AI art is vampirical, feasting on past generations of artwork even as it sucks the lifeblood from living artists.\u201d But there are ways to resist: we can refuse to use these products and organize to demand that our employers and governments reject them as well. A <a target=\"_blank\" href=\"https:\/\/www.dair-institute.org\/blog\/letter-statement-March2023\"  data-link-name=\"in body link\">letter<\/a> from prominent scholars of AI ethics, including Timnit Gebru who was fired by Google in 2020 for challenging workplace discrimination, lays out some of the regulatory tools that governments can introduce immediately \u2013 including full transparency about what data sets are being used to train the models. 
The authors write: \u201cNot only should it always be clear when we are encountering synthetic media, but organizations building these systems should also be required to document and disclose the training data and model architectures \u2026. We should be building machines that work for us, instead of \u2018adapting\u2019 society to be machine readable and writable.\u201d<\/p>\n<p class=\"dcr-94xsh\">Though tech companies would like us to believe that it is already too late to roll back this human-replacing, mass-mimicry product, there are highly relevant legal and regulatory precedents that can be enforced. For instance, the US Federal Trade Commission (FTC) <a target=\"_blank\" href=\"https:\/\/digiday.com\/media\/why-the-ftc-is-forcing-tech-firms-to-kill-their-algorithms-along-with-ill-gotten-data\/\"  data-link-name=\"in body link\">forced<\/a> Cambridge Analytica, as well as Everalbum, the owner of a photo app, to destroy entire algorithms found to have been trained on illegitimately appropriated data and scraped photos. In its early days, the Biden administration made many bold claims about regulating big tech, including cracking down on the theft of personal data to build proprietary algorithms. With a presidential election fast approaching, now would be a good time to make good on those promises \u2013 and avert the next set of mass layoffs before they happen.<\/p>\n<p class=\"dcr-94xsh\">A world of deep fakes, mimicry loops and worsening inequality is not an inevitability. It\u2019s a set of policy choices. We can regulate the current form of vampiric chatbots out of existence \u2013 and begin to build the world in which AI\u2019s most exciting promises would be more than Silicon Valley hallucinations.<\/p>\n<p class=\"dcr-94xsh\">Because we trained the machines. All of us. But we never gave our consent. They fed on humanity\u2019s collective ingenuity, inspiration and revelations (along with our more venal traits). 
These models are enclosure and appropriation machines, devouring and privatizing our individual lives as well as our collective intellectual and artistic inheritances. And their goal never was to solve climate change or make our governments more responsible or our daily lives more leisurely. It was always to profit off mass immiseration, which, under capitalism, is the glaring and logical consequence of replacing human functions with bots.<\/p>\n<p class=\"dcr-94xsh\">Is all of this overly dramatic? A stuffy and reflexive resistance to exciting innovation? Why expect the worst? Altman <a target=\"_blank\" href=\"https:\/\/steno.ai\/lex-fridman-podcast-10\/367-sam-altman-openai-ceo-on-gpt-4-chatgpt-and\"  data-link-name=\"in body link\">reassures<\/a> us: \u201cNobody wants to destroy the world.\u201d Perhaps not. But as the ever-worsening climate and extinction crises show us every day, plenty of powerful people and institutions seem to be just fine knowing that they are helping to destroy the stability of the world\u2019s life-support systems, so long as they can keep making <a target=\"_blank\" href=\"https:\/\/www.theguardian.com\/business\/2023\/apr\/28\/exxonmobil-chevron-record-profits\"  data-link-name=\"in body link\">record<\/a> profits that they believe will protect them and their families from the worst effects. 
Altman, like many creatures of Silicon Valley, is himself a prepper: back in 2016, he <a target=\"_blank\" href=\"https:\/\/www.newyorker.com\/magazine\/2016\/10\/10\/sam-altmans-manifest-destiny\"  data-link-name=\"in body link\">boasted<\/a>: \u201cI have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force and a big patch of land in Big Sur I can fly to.\u201d<\/p>\n<p class=\"dcr-94xsh\">I\u2019m pretty sure those facts say a lot more about what Altman actually believes about the future he is helping unleash than whatever flowery hallucinations he is choosing to share in press interviews.<\/p>\n<p><em>____________________________________________<\/em><\/p>\n<p style=\"padding-left: 40px;\"><em><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/05\/Naomi_Klein.webp\" ><img loading=\"lazy\" decoding=\"async\" class=\"alignleft wp-image-235096\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/05\/Naomi_Klein.webp\" alt=\"\" width=\"100\" height=\"100\" \/><\/a> Naomi Klein is the award-winning author of the international bestseller, <\/em>No Logo: Taking Aim at the Brand Bullies<em>, translated into 28 languages. She writes an internationally syndicated column for <\/em>The Nation<em> magazine and the <\/em>Guardian<em> newspaper. She is a former Miliband Fellow at the London School of Economics and holds an honorary Doctor of Civil Laws from the University of King&#8217;s College, Nova Scotia. Her book <\/em>The Shock Doctrine: The Rise of Disaster Capitalism <em>was published worldwide in 2007. 
<\/em><a target=\"_blank\" href=\"https:\/\/www.simonandschuster.com\/books\/How-to-Change-Everything\/Naomi-Klein\/9781534474529\" >How to Change Everything: The Young Human\u2019s Guide to Protecting the Planet and Each Other<\/a> <em>by Naomi Klein (Simon &amp; Schuster, 2021), is out now.<\/em><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.theguardian.com\/commentisfree\/2023\/may\/08\/ai-machines-hallucinating-naomi-klein\" >Go to Original &#8211; theguardian.com<\/a><\/p>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>8 May 2023 &#8211; Tech CEOs want us to believe that generative Artificial Intelligence will benefit humanity. They are kidding themselves<\/p>\n","protected":false},"author":4,"featured_media":235096,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3078],"tags":[1733,1009,307],"class_list":["post-235095","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","tag-artificial-intelligence-ai","tag-big-tech","tag-humanity"],"_links":{"self":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/235095","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/comments?post=235095"}],"version-history":[{"count":2,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/235095\/revisions"}],"predecessor-version":[{"id":235101,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/235095\/revisions\/235101"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media\/235096"}
],"wp:attachment":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media?parent=235095"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/categories?post=235095"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/tags?post=235095"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}