{"id":270521,"date":"2024-07-29T12:00:27","date_gmt":"2024-07-29T11:00:27","guid":{"rendered":"https:\/\/www.transcend.org\/tms\/?p=270521"},"modified":"2024-07-29T05:32:18","modified_gmt":"2024-07-29T04:32:18","slug":"chatgpt-isnt-hallucinating-its-bullshitting","status":"publish","type":"post","link":"https:\/\/www.transcend.org\/tms\/2024\/07\/chatgpt-isnt-hallucinating-its-bullshitting\/","title":{"rendered":"ChatGPT Isn\u2019t \u2018Hallucinating\u2019\u2014It\u2019s Bullshitting!"},"content":{"rendered":"<div id=\"attachment_270522\" style=\"width: 410px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/07\/robot-ai.webp\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-270522\" class=\"wp-image-270522\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/07\/robot-ai-1024x683.webp\" alt=\"\" width=\"400\" height=\"267\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/07\/robot-ai-1024x683.webp 1024w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/07\/robot-ai-300x200.webp 300w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/07\/robot-ai-768x512.webp 768w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2024\/07\/robot-ai.webp 1200w\" sizes=\"auto, (max-width: 400px) 100vw, 400px\" \/><\/a><p id=\"caption-attachment-270522\" class=\"wp-caption-text\">Malte Mueller\/Getty Images<\/p><\/div>\n<blockquote><p><em>It\u2019s important that we use accurate terminology when discussing how AI chatbots make up information.<\/em><\/p><\/blockquote>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\"><em>17 Jul 2024\u00a0<\/em>&#8211; Right now artificial intelligence is everywhere. When you write a document, you\u2019ll probably be asked whether you need your \u201cAI assistant.\u201d Open a PDF and you might be asked whether you want an AI to provide you with a summary. But if you have used <a target=\"_blank\" href=\"https:\/\/chatgpt.com\/\" >ChatGPT<\/a> or similar programs, you\u2019re probably familiar with a certain problem\u2014<a target=\"_blank\" href=\"https:\/\/www.scientificamerican.com\/article\/chatbot-hallucinations-inevitable\/\" >it makes stuff up<\/a>, causing people to view things it says with suspicion.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">It has become common to describe these errors as \u201c<a target=\"_blank\" href=\"https:\/\/www.jmir.org\/2024\/1\/e53164\/\" >hallucinations<\/a>.\u201d But talking about ChatGPT this way is misleading and potentially damaging. Instead call it bullshit.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">We don\u2019t say this lightly. Among philosophers, \u201cbullshit\u201d has <a target=\"_blank\" href=\"https:\/\/onlinelibrary.wiley.com\/doi\/full\/10.1111\/theo.12271\" >a specialist meaning<\/a>, one popularized by the late American philosopher <a target=\"_blank\" href=\"https:\/\/academic.oup.com\/litthe\/article-abstract\/19\/4\/412\/955558?login=false\" >Harry Frankfurt<\/a>. When someone bullshits, they\u2019re not telling the truth, but they\u2019re also not really lying. What characterizes the bullshitter, Frankfurt said, is that they just don\u2019t care whether what they say is true. 
ChatGPT and its peers cannot care, and they are instead, in a technical sense, bullshit machines.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">We can easily see why this is true and why it matters. Last year, for example, one lawyer found himself in hot water when he used ChatGPT in his research <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/legal\/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22\/\" >while writing a legal brief<\/a>. Unfortunately, ChatGPT had included fictitious case citations. The cases it cited simply did not exist.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">This isn\u2019t rare or <a target=\"_blank\" href=\"https:\/\/www.nature.com\/articles\/s41586-024-07421-0\" >anomalous<\/a>. To understand why, it\u2019s worth thinking a bit about <a target=\"_blank\" href=\"https:\/\/www.scientificamerican.com\/article\/how-does-chatgpt-think-psychology-and-neuroscience-crack-open-ai-large\/\" >how these programs work<\/a>. OpenAI\u2019s ChatGPT, Google\u2019s Gemini chatbot and Meta\u2019s Llama all work in structurally similar ways. At their core is an LLM\u2014a large language model. These models all make predictions about language. Given some input, ChatGPT will make some prediction about what should come next or what is an appropriate response. It does so through an analysis of enormous amounts of text (its \u201ctraining data\u201d). In ChatGPT\u2019s case, the <a target=\"_blank\" href=\"https:\/\/sitn.hms.harvard.edu\/flash\/2023\/the-making-of-chatgpt-from-data-to-dialogue\/\" >initial training data<\/a> included billions of pages of text from the Internet.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">From those training data, the LLM predicts, from some text fragment or prompt, what should come next. It will arrive at a list of the most likely words (technically, <a target=\"_blank\" href=\"https:\/\/link.springer.com\/article\/10.1007\/s00146-023-01710-4\" >linguistic tokens<\/a>) to come next, then select one of the leading candidates. Allowing it not to choose the most likely word each time makes for more creative (and more human-sounding) language. The parameter that sets how much deviation is permitted is known as the \u201ctemperature.\u201d Later in the process, <a target=\"_blank\" href=\"https:\/\/theconversation.com\/chatgpt-and-other-language-ais-are-nothing-without-humans-a-sociologist-explains-how-countless-hidden-people-make-the-magic-211658\" >human trainers refine predictions<\/a> by judging whether the outputs constitute sensible speech. Extra restrictions may also be placed on the program to avoid problems (such as <a target=\"_blank\" href=\"https:\/\/www.scientificamerican.com\/article\/even-chatgpt-says-chatgpt-is-racially-biased\/\" >ChatGPT saying racist things<\/a>), but this token-by-token prediction is the idea that underlies all of this technology.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">Now, we can see from this description that nothing about the modeling ensures that the outputs accurately depict anything in the world. There is not much reason to think that the outputs are connected to any sort of internal representation at all. 
A well-trained chatbot will produce humanlike text, but nothing about the process checks that the text is true, which is why we strongly doubt an LLM really understands what it says.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">So sometimes ChatGPT says false things. In recent years, as we have become accustomed to AI, people have started to refer to these falsehoods as \u201c<a target=\"_blank\" href=\"https:\/\/www.theguardian.com\/books\/2023\/nov\/15\/hallucinate-cambridge-dictionary-word-of-the-year\" >AI hallucinations<\/a>.\u201d While this language is metaphorical, we think it\u2019s not a good metaphor.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">Consider Shakespeare\u2019s paradigmatic hallucination in which <a target=\"_blank\" href=\"https:\/\/link.springer.com\/article\/10.1007\/s11229-014-0492-4\" >Macbeth sees a dagger<\/a> floating toward him. What\u2019s going on here? Macbeth is trying to use his perceptual capacities in his normal way, but something has gone wrong. And his perceptual capacities are almost always reliable\u2014he doesn\u2019t usually see daggers randomly floating about! Normally his vision is useful in representing the world, and it is good at this because of its connection to the world.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">Now think about ChatGPT. Whenever it says anything, it is simply trying to produce humanlike text. The goal is simply to make something that sounds good. This is never directly tied to the world. When it goes wrong, it isn\u2019t because it hasn\u2019t succeeded in representing the world this time; it never tries to represent the world! Calling its falsehoods \u201challucinations\u201d doesn\u2019t capture this feature.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">Instead we suggest, <a target=\"_blank\" href=\"https:\/\/link.springer.com\/article\/10.1007\/s10676-024-09775-5\" >in a June report<\/a> in <i>Ethics and Information Technology<\/i>, that a better term is \u201cbullshit.\u201d As mentioned, a bullshitter just doesn\u2019t care whether what they say is true.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">So if we do regard ChatGPT as engaging in a conversation with us\u2014though <a target=\"_blank\" href=\"https:\/\/journals.publishing.umich.edu\/ergo\/article\/id\/4668\/\" >even this might be a bit of a pretense<\/a>\u2014then it seems to fit the bill. As much as it intends to do anything, it intends to produce convincing humanlike text. It isn\u2019t trying to say things about the world. It\u2019s just bullshitting. And crucially, it\u2019s bullshitting even when it says true things!<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">Why does this matter? Isn\u2019t \u201challucination\u201d just a nice metaphor here? Does it really matter if it\u2019s not apt? We think it does matter for at least three reasons:<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\"><strong>First,<\/strong> the terminology we use affects public understanding of technology, which is important in itself. If we use misleading terms, people are more likely to misconstrue how the technology works. We think this in itself is a bad thing.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\"><strong>Second,<\/strong> how we describe technology affects our relationship with that technology and how we think about it. And this can be harmful. 
Consider people who have been lulled into a false sense of security by <a target=\"_blank\" href=\"https:\/\/www.theguardian.com\/technology\/2018\/jan\/24\/self-driving-cars-dangerous-period-false-security\" >\u201cself-driving\u201d cars<\/a>. We worry that talking of AI \u201challucinating\u201d\u2014a term usually used for human psychology\u2014risks anthropomorphizing the chatbots. <a target=\"_blank\" href=\"https:\/\/academic.oup.com\/book\/39707\/chapter-abstract\/339718866?redirectedFrom=fulltext&amp;login=false\" >The ELIZA effect<\/a> (named after a chatbot from the 1960s) occurs when people attribute human features to computer programs. We saw this in extremis in the case of the <a target=\"_blank\" href=\"https:\/\/www.scientificamerican.com\/article\/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters\/\" >Google employee who came to believe that one of the company\u2019s chatbots was sentient<\/a>. Describing ChatGPT as a bullshit machine (even if it\u2019s a very impressive one) helps mitigate this risk.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\"><strong>Third,<\/strong> if we attribute agency to the programs, this may shift blame away from those using ChatGPT, or its programmers, when things go wrong. If, as appears to be the case, this kind of technology will increasingly be used in important matters <a target=\"_blank\" href=\"https:\/\/www.ncbi.nlm.nih.gov\/pmc\/articles\/PMC10552880\/\" >such as health care<\/a>, it is crucial that we know who is responsible when things go wrong.<\/p>\n<p class=\"article__block-KZIY9\" data-block=\"sciam\/paragraph\">So, next time you see someone describing an AI making something up as a \u201challucination,\u201d call it bullshit!<\/p>\n<p data-block=\"sciam\/paragraph\">_____________________________________________________<\/p>\n<p style=\"padding-left: 40px;\" data-block=\"sciam\/paragraph\"><b><a target=\"_blank\" href=\"https:\/\/www.scientificamerican.com\/author\/joe-slater\/\" class=\"bioLink-kqdDv\" ><em>Joe Slater<\/em><\/a><\/b><em> is a lecturer in moral and political philosophy at the University of Glasgow.<\/em><\/p>\n<p style=\"padding-left: 40px;\" data-block=\"sciam\/paragraph\"><em><b><a target=\"_blank\" href=\"https:\/\/www.scientificamerican.com\/author\/james-humphries\/\" class=\"bioLink-kqdDv\" >James Humphries<\/a><\/b> is a lecturer in political theory at the University of Glasgow.<\/em><\/p>\n<p style=\"padding-left: 40px;\" data-block=\"sciam\/paragraph\"><em><b><a target=\"_blank\" href=\"https:\/\/www.scientificamerican.com\/author\/michael-townsen-hicks\/\" class=\"bioLink-kqdDv\" >Michael Townsen Hicks<\/a><\/b> is a lecturer in philosophy of science and technology at the University of Glasgow.<\/em><\/p>\n<p data-block=\"sciam\/paragraph\"><a target=\"_blank\" href=\"https:\/\/www.scientificamerican.com\/article\/chatgpt-isnt-hallucinating-its-bullshitting\/\" >Go to Original &#8211; scientificamerican.com<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>17 Jul 2024 &#8211; It\u2019s important that we use accurate terminology when discussing how AI chatbots make up 
information.<\/p>\n","protected":false},"author":4,"featured_media":270522,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3078],"tags":[1733,3022,2994,1108],"class_list":["post-270521","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","tag-artificial-intelligence-ai","tag-chatbot","tag-chatgpt","tag-robots"],"_links":{"self":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/270521","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/comments?post=270521"}],"version-history":[{"count":1,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/270521\/revisions"}],"predecessor-version":[{"id":270523,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/270521\/revisions\/270523"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media\/270522"}],"wp:attachment":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media?parent=270521"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/categories?post=270521"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/tags?post=270521"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}