{"id":271943,"date":"2024-08-26T12:00:57","date_gmt":"2024-08-26T11:00:57","guid":{"rendered":"https:\/\/www.transcend.org\/tms\/?p=271943"},"modified":"2024-08-22T05:37:52","modified_gmt":"2024-08-22T04:37:52","slug":"how-artificial-intelligence-challenges-the-concept-of-authorship","status":"publish","type":"post","link":"https:\/\/www.transcend.org\/tms\/2024\/08\/how-artificial-intelligence-challenges-the-concept-of-authorship\/","title":{"rendered":"How Artificial Intelligence Challenges the Concept of Authorship"},"content":{"rendered":"<div id=\"attachment_244980\" style=\"width: 610px\" class=\"wp-caption aligncenter\"><a href=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/09\/artificial-intelligence-ai.jpg\" ><img loading=\"lazy\" decoding=\"async\" aria-describedby=\"caption-attachment-244980\" class=\"wp-image-244980\" src=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/09\/artificial-intelligence-ai-1024x435.jpg\" alt=\"\" width=\"600\" height=\"255\" srcset=\"https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/09\/artificial-intelligence-ai-1024x435.jpg 1024w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/09\/artificial-intelligence-ai-300x128.jpg 300w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/09\/artificial-intelligence-ai-768x327.jpg 768w, https:\/\/www.transcend.org\/tms\/wp-content\/uploads\/2023\/09\/artificial-intelligence-ai.jpg 1030w\" sizes=\"auto, (max-width: 600px) 100vw, 600px\" \/><\/a><p id=\"caption-attachment-244980\" class=\"wp-caption-text\">Illustration: geralt\/pixabay<\/p><\/div>\n<blockquote><p><em>If AI creates the content, who owns the work? Answering this complex question is crucial to understanding the legal and ethical implications of AI-generated content.<\/em><\/p><\/blockquote>\n<p><em>20 Aug 2024 &#8211;<\/em> Producing art and text using computers is not new. It has been happening since the 1970s. 
What is new is that computers are acting independently: without programmers providing direct input, the program generates the work, even if programmers have set the parameters.<\/p>\n<p>Not only are computers acting more independently, but the quality of the content being generated has also increased. How this content is used has changed, too, and it may not always be created with the best motives. This is the new frontier of artificial intelligence, or AI.<\/p>\n<p>Coursera, a for-profit open online course provider, <a target=\"_blank\" href=\"https:\/\/www.coursera.org\/articles\/what-is-artificial-intelligence\" >stated<\/a>, \u201cArtificial intelligence is the theory and development of computer systems capable of performing tasks that historically required human intelligence, such as recognizing speech, making decisions, and identifying patterns. AI is an umbrella term encompassing various technologies, including <a target=\"_blank\" href=\"https:\/\/www.coursera.org\/articles\/what-is-machine-learning\" >machine learning<\/a>, <a target=\"_blank\" href=\"https:\/\/www.coursera.org\/articles\/what-is-deep-learning\" >deep learning<\/a>, and <a target=\"_blank\" href=\"https:\/\/www.coursera.org\/articles\/natural-language-processing\" >natural language processing<\/a>.\u201d<\/p>\n<p>The \u201c<a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >Generative Artificial Intelligence and Copyright Law<\/a>\u201d report by the Congressional Research Service offers a more specific perspective: \u201cSo-called \u2018generative AI\u2019 computer programs\u2014such as OpenAI\u2019s <a target=\"_blank\" href=\"https:\/\/openai.com\/index\/dall-e-3\/\" >DALL-E<\/a> and <a target=\"_blank\" href=\"https:\/\/openai.com\/index\/chatgpt\/\" >ChatGPT<\/a> programs, Stability AI\u2019s <a target=\"_blank\" href=\"https:\/\/stablediffusionweb.com\/\" >Stable Diffusion<\/a> programs, and <a target=\"_blank\" 
href=\"https:\/\/www.midjourney.com\/home\" >Midjourney\u2019s self-titled program<\/a>\u2014can generate new images, texts, and other content (or \u2018outputs\u2019) in response to a user\u2019s textual prompts (or \u2018inputs\u2019).\u201d<\/p>\n<p>These AI programs are trained by exposing them to staggeringly large quantities of existing texts, photos, paintings, and other artworks. For example, generative pretrained transformers (GPTs) are a type of large language model (LLM) that use massive datasets comprising articles, books, and essays available on the internet to generate any kind of text. (Paul McDonagh-Smith, a senior lecturer in information technology at the MIT Sloan School of Management, <a target=\"_blank\" href=\"https:\/\/www.forbes.com\/sites\/joemckendrick\/2023\/08\/08\/why-gpt-should-stand-for-general-purpose-technology-for-all\/?sh=35b250236641\" >suggested<\/a> a less technical meaning for the acronym: General purpose technology.)<\/p>\n<p>Programmers create generative AI platforms by searching for patterns and relationships in these vast archives of images and text. Then, a process akin to the one behind autofill creates rules and makes judgments and predictions when responding to a prompt or input.<\/p>\n<p>But who has the right to the results or the output? Does copyright, patent, or trademark apply to AI creations? Who owns the content that AI platforms produce for a company or its customers?<\/p>\n<p>Is scraping the internet\u2014the term applied to harvesting content online\u2014for the texts and images that train LLMs fair use, as the AI companies claim, or do these companies require permission and owe royalties to the content owners?<\/p>\n<p>Put another way, would it make more sense to confer <a target=\"_blank\" href=\"https:\/\/www.wipo.int\/wipo_magazine\/en\/2017\/05\/article_0003.html\" >copyright on a pen manufacturer for a book<\/a> rather than on the writer who used the pen to write it? 
In digital terms, it\u2019s evident that Microsoft Word can\u2019t assert copyright over texts created using the program. Why should AI be any different? As it turns out, the answer to this question isn\u2019t straightforward.<\/p>\n<p><strong>An Uncertain Legal Situation<\/strong><\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.aaas.org\/ai2\/projects\/law\/judicialpapers\" >Courts have yet to consider<\/a> how fair use standards apply to AI tools.<\/p>\n<p>\u201c[T]here isn\u2019t a clear answer to whether or not in the United States that is copyright infringement or whether it\u2019s fair use,\u201d <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2023\/12\/30\/business\/media\/copyright-law-ai-media.htm\" >stated<\/a> Ryan Abbott, a lawyer at Brown Neri Smith &amp; Khan. In an interview with the New York Times, he said, \u201cIn the meantime, we have lots of lawsuits moving forward with potentially billions of dollars at stake.\u201d<\/p>\n<p>Because the lawsuits raising these questions are in the early stages of litigation, it could be years before a federal district court rules on the matter or these cases go to the Supreme Court. Regulators have yet to make definitive rulings on the rights and responsibilities of AI companies that use original content, or on the rights of the creators of that content.<\/p>\n<p><strong>What U.S. Copyright Law Says<\/strong><\/p>\n<p>The Copyright Office has <a target=\"_blank\" href=\"https:\/\/www.wipo.int\/wipo_magazine\/en\/2017\/05\/article_0003.html\" >adopted an official policy<\/a> declaring that it will \u201cregister an original work of authorship, provided that the work was created by a human being.\u201d This leads to the question of whether AI-generated content can be considered the creation of a human being. 
In one sense, it is, yet the program usually generates content that no human being is responsible for, leaving the question largely unanswered.<\/p>\n<p>To answer this question, we must consider <a target=\"_blank\" href=\"https:\/\/www.insidehighered.com\/news\/tech-innovation\/2023\/08\/22\/ai-raises-complicated-questions-about-authorship\" >the concept of authorship<\/a>. <a target=\"_blank\" href=\"https:\/\/www.law.cornell.edu\/wex\/intellectual_property_clause\" >Article I, Section 8<\/a> of the U.S. Constitution authorizes Congress to \u201c[secure] for limited times to authors\u2026 the exclusive right to their\u2026 writings.\u201d That means that the Copyright Act affords copyright protection to \u201coriginal works of authorship.\u201d What constitutes authorship? Both the Constitution and Copyright Act are silent on that question.<\/p>\n<p>The September 2023 <a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >report<\/a> published by the Congressional Research Service suggested that the Copyright Office wasn\u2019t likely to find the requisite human authorship where an AI program generates works in response to text prompts.<\/p>\n<p>However, we must consider the human creativity required to design AI software. Programmers may make creative choices in coding and training the AI software, giving them a stronger claim to some form of authorship. Would the programmers\u2019 contributions warrant copyright protection? Or would AI\u2014or rather the company that owns the AI program like Microsoft or OpenAI\u2014deserve the protection?<\/p>\n<p>The U.S. Copyright Office acknowledges that the advent of AI presents unprecedented difficulties that Congress must address. \u201c[W]e have concluded that a new law is needed,\u201d <a target=\"_blank\" href=\"https:\/\/copyright.gov\/ai\/Copyright-and-Artificial-Intelligence-Part-1-Digital-Replicas-Report.pdf\" >stated<\/a> a July 2024 U.S. 
Copyright Office report \u201cCopyright and Artificial Intelligence.\u201d \u201cThe speed, precision, and scale of AI-created digital replicas [call] for prompt federal action. Without a robust nationwide remedy, their unauthorized publication and distribution threaten substantial harm\u2026 in the entertainment and political arenas.\u201d<\/p>\n<p>The report proposes adopting a new federal law that protects all individuals, not just celebrities or public figures, against the creation and distribution of their digital likenesses without consent. It calls for online service providers to \u201cremove unauthorized digital replicas\u201d upon receiving \u201ceffective notice.\u201d The report also recommends giving individuals the right to \u201clicense and monetize\u201d their digital replica rights. The agency acknowledges that First Amendment concerns need to be accounted for in any new statute. The proposed reforms would also protect \u201cagainst AI outputs that deliberately imitate an artist\u2019s style,\u201d though the report does not define what constitutes an artist\u2019s style.<\/p>\n<p><strong>How Other Countries Protect Content<\/strong><\/p>\n<p>Cases in other countries offer few valuable precedents. In March 2012, for example, in an Australian case (<a target=\"_blank\" href=\"http:\/\/www.austlii.edu.au\/au\/cases\/cth\/FCAFC\/2012\/16.html\" ><em>Acohs Pty Ltd<\/em><\/a><a target=\"_blank\" href=\"http:\/\/www.austlii.edu.au\/au\/cases\/cth\/FCAFC\/2012\/16.html\" > v. 
<\/a><a target=\"_blank\" href=\"http:\/\/www.austlii.edu.au\/au\/cases\/cth\/FCAFC\/2012\/16.html\" ><em>Ucorp Pty Ltd<\/em><\/a>), a court found that a work generated by a computer could not be protected by copyright law because a human did not produce it.<\/p>\n<p>In 2009, the Court of Justice of the European Union declared in the <a target=\"_blank\" href=\"https:\/\/eur-lex.europa.eu\/legal-content\/EN\/TXT\/?uri=CELEX%3A62008CJ0005\" >Infopaq decision<\/a> \u201cthat copyright only applies to original works, and that originality must reflect the \u2018author\u2019s own intellectual creation,\u2019\u201d stated WIPO magazine.<\/p>\n<p>Courts in other countries\u2014India, Ireland, New Zealand, and Hong Kong\u2014are more favorable to the programmer as the \u201cauthor.\u201d <a target=\"_blank\" href=\"https:\/\/www.wipo.int\/wipo_magazine\/en\/2017\/05\/article_0003.html\" >Copyright law<\/a> in the United Kingdom appears to hedge its bets: \u201cIn the case of a literary, dramatic, musical or artistic work which is computer-generated, the author shall be taken to be the person by whom the arrangements necessary for the creation of the work are undertaken,\u201d <a target=\"_blank\" href=\"https:\/\/www.wipo.int\/wipo_magazine\/en\/2017\/05\/article_0003.html\" >added<\/a> the article.<\/p>\n<p><strong>Lack of Clarity on What Constitutes Infringement<\/strong><\/p>\n<p>Building large language models, image-producing programs like DALL-E, music-composition systems, and voice-recognition tools requires training. AI can generate something only after this training, which invariably involves making digital copies of existing works.<\/p>\n<p>According to the U.S. 
Patent and Trademark Office, this <a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >process<\/a> \u201cwill almost by definition involve the reproduction of entire works or substantial portions thereof.\u201d For instance, OpenAI <a target=\"_blank\" href=\"https:\/\/www.uspto.gov\/sites\/default\/files\/documents\/OpenAI_RFC-84-FR-58141.pdf\" >accepts<\/a> that its programs are trained on \u201clarge, publicly available datasets that include copyrighted works.\u201d<\/p>\n<p>Whether or not copying constitutes fair use depends on four statutory factors under <a target=\"_blank\" href=\"https:\/\/www.law.cornell.edu\/uscode\/text\/17\/107\" >17 U.S.C. \u00a7 107<\/a>, according to Cornell Law School:<\/p>\n<ol>\n<li>\u201c[T]he purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes;<\/li>\n<li>[T]he nature of the copyrighted work;<\/li>\n<li>[T]he amount and substantiality of the portion used in relation to the copyrighted work as a whole;<\/li>\n<li>[T]he effect of the use upon the potential market for or value of the copyrighted work.\u201d<\/li>\n<\/ol>\n<p>Depending on the jurisdiction, different federal circuit courts may respond with varying interpretations of the <a target=\"_blank\" href=\"https:\/\/www.lawfaremedia.org\/article\/ai-generated-works-artists-and-intellectual-property\" >fair use doctrine<\/a>, which allows copyrighted work to be used without the owner\u2019s permission \u201cfor purposes such as criticism (including satire), comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research,\u201d according to the nonprofit publication Lawfare. 
This is called transformative use under the doctrine and lets a person \u201cexploit\u201d copyrighted material in a way that was not originally intended.<\/p>\n<p>In a <a target=\"_blank\" href=\"https:\/\/committees.parliament.uk\/writtenevidence\/126981\/pdf\/\" >submission<\/a> to the House of Lords Communications and Digital Select Committee inquiry in December 2023, OpenAI said it could train large language models, <a target=\"_blank\" href=\"https:\/\/www.theguardian.com\/technology\/2023\/mar\/14\/chat-gpt-4-new-model\" >such as its GPT-4 model<\/a>, only by accessing copyrighted work. \u201cBecause copyright today covers virtually every sort of human expression\u2014including blog posts, photographs, forum posts, scraps of software code, and government documents\u2014it would be impossible to train today\u2019s leading AI models without using copyrighted materials.\u201d<\/p>\n<p>According to the congressional <a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >report<\/a>, \u201cOpenAI <a target=\"_blank\" href=\"https:\/\/www.uspto.gov\/sites\/default\/files\/documents\/OpenAI_RFC-84-FR-58141.pdf#page=5\" >argues<\/a> that its purpose is \u2018transformative\u2019 as opposed to \u2018expressive\u2019 because the training process creates \u2018a useful generative AI system\u2019\u201d and further contends that fair use is applicable because the content it uses is intended exclusively to train its programs and is not shared with the public. 
If a work is considered \u201ctransformative\u201d under OpenAI\u2019s interpretation, it has to be significantly altered from the original so it is not viewed as an imitation.<\/p>\n<p>Meanwhile, OpenAI, which has created tools like <a target=\"_blank\" href=\"https:\/\/www.theguardian.com\/technology\/2023\/feb\/02\/chatgpt-100-million-users-open-ai-fastest-growing-app\" >its groundbreaking chatbot<\/a>, ChatGPT, stated that such tools would be impossible to build without access to copyrighted material. However, the company insists that it has taken steps to avoid the possibility of infringement, asserting, for example, that its visual art program <a target=\"_blank\" href=\"https:\/\/openai.com\/dall-e-3\" >DALL-E 3<\/a> \u201cis designed to decline requests that ask for an image in the style of a living artist.\u201d<\/p>\n<p>The AI company also <a target=\"_blank\" href=\"https:\/\/www.theguardian.com\/technology\/2024\/jan\/08\/ai-tools-chatgpt-copyrighted-material-openai\" >maintains<\/a> that it needs to use copyrighted materials to produce a relevant system: \u201cLimiting training data to public domain books and drawings created more than a century ago might yield an interesting experiment, but would not provide AI systems that meet the needs of today\u2019s citizens,\u201d stated a January 2024 article in the Guardian.<\/p>\n<p>As a legal precedent, the company cites <a target=\"_blank\" href=\"https:\/\/scholar.google.com\/scholar_case?case=2220742578695593916&amp;q=authors+guild+v+google&amp;hl=en&amp;as_sdt=20006\" ><em>Authors Guild, Inc. v. Google, Inc.<\/em><\/a>, \u201cin which the U.S. 
Court of Appeals for the Second Circuit held that Google\u2019s copying of entire books to create a searchable database that displayed excerpts of those books constituted fair use,\u201d the congressional <a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >report<\/a> stated.<\/p>\n<p>Unsurprisingly, <a target=\"_blank\" href=\"https:\/\/news.artnet.com\/art-world\/openai-says-creating-ai-is-impossible-without-copyrighted-material-2417327\" >OpenAI\u2019s position<\/a> has met with considerable criticism. \u201cWe won\u2019t get fabulously rich if you don\u2019t let us steal, so please don\u2019t make stealing a crime!\u201d <a target=\"_blank\" href=\"https:\/\/twitter.com\/GaryMarcus\/status\/1744362345403392510\" >wrote<\/a> AI skeptic Gary Marcus on the social media site X (formerly known as Twitter). \u201cSure, Netflix might pay billions a year in licensing fees, but \u2018we\u2019 (OpenAI) shouldn\u2019t have to!\u201d<\/p>\n<p><strong>When Is a Piece of Work Too Similar?<\/strong><\/p>\n<p>Copyright owners must meet a high bar to demonstrate that the output of an AI program has infringed their rights; for example, if a painter maintains that a DALL-E image bears an uncanny resemblance to their work, a court would find copyright infringement \u201cif the AI program both 1) had access to their works and 2) created \u2018substantially similar\u2019 outputs,\u201d the <a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >report<\/a> stated.<\/p>\n<p>\u201cCourts have variously <a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >described the test<\/a> as requiring, for example, that the works have \u2018a substantially similar <a target=\"_blank\" href=\"https:\/\/scholar.google.com\/scholar_case?case=4721891391972530140&amp;q=919+f.2d+1353&amp;hl=en&amp;as_sdt=20006\" >total concept and feel<\/a>\u2019 or \u2018<a 
target=\"_blank\" href=\"https:\/\/scholar.google.com\/scholar_case?case=3810785577629247042&amp;q=273+F.3d+262&amp;hl=en&amp;as_sdt=20006\" >overall look and feel<\/a>\u2019 or that \u2018the <a target=\"_blank\" href=\"https:\/\/scholar.google.com\/scholar_case?case=6978867242855141719&amp;q=338+F.2d+949&amp;hl=en&amp;as_sdt=20006\" >ordinary reasonable person<\/a> would fail to differentiate between the two works,\u2019\u201d <a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >added<\/a> the report.<\/p>\n<p>Leading cases have also pointed out that such a determination should consider \u201c<a target=\"_blank\" href=\"https:\/\/scholar.google.com\/scholar_case?case=660862405243129598&amp;q=388+F.3d+1189&amp;hl=en&amp;as_sdt=20006\" >the qualitative and quantitative significance<\/a> of the copied portion in relation to the plaintiff\u2019s work as a whole.\u201d However, the painter might be able to prove that an image was scraped off the internet to \u201ctrain\u201d the program, resulting in an image similar to their original creation in most, if not all, respects.<\/p>\n<p>In OpenAI\u2019s words, though, any allegation of infringement by a copyright holder would be \u201c<a target=\"_blank\" href=\"https:\/\/www.uspto.gov\/sites\/default\/files\/documents\/OpenAI_RFC-84-FR-58141.pdf#page=11\" >an unlikely accidental outcome<\/a>.\u201d<\/p>\n<p>Courts have been asked to clarify what a \u201cderivative work\u201d is <a target=\"_blank\" href=\"https:\/\/hbr.org\/2023\/04\/generative-ai-has-an-intellectual-property-problem\" >under intellectual property laws<\/a>. Alternatively, some AI programs may be used to create works involving existing fictional characters, which sometimes enjoy <a target=\"_blank\" href=\"https:\/\/www.nolo.com\/legal-encyclopedia\/protecting-fictional-characters-under-copyright-law.html\" >copyright protection<\/a> in and of themselves. 
An AI program may also be prompted to create artistic or literary works \u201c<a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >in the style of<\/a>\u201d a particular artist or author. However, emulation of an artist\u2019s or author\u2019s style does not violate copyright law.<\/p>\n<p>These cases also raise the possibility that users of AI-generated images and text that infringe on the copyrights of existing works may be liable, in addition to the AI companies that produced them. (Legal penalties were imposed on users who downloaded music illegally from the now-defunct <a target=\"_blank\" href=\"https:\/\/en.wikipedia.org\/wiki\/Metallica_v._Napster,_Inc.\" >Napster<\/a>.)<\/p>\n<p>The AI company could potentially face liability under the doctrine of \u201c<a target=\"_blank\" href=\"https:\/\/www.law.cornell.edu\/wex\/vicarious_infringement\" >vicarious infringement<\/a>,\u201d which pertains to defendants who have \u201cthe right and ability to control the infringing activities\u201d and \u201ca direct financial interest in such activities.\u201d Of course, users might be innocent of any wrongdoing if they did not prompt the program with any awareness of what they would obtain. 
How would the owner of a copyrighted work then establish infringement?<\/p>\n<p>For example, <a target=\"_blank\" href=\"https:\/\/openai.com\/policies\/terms-of-use\" >OpenAI\u2019s terms of use<\/a> seem to let the company off the hook by shifting any blame for copyright issues onto the user: \u201cWe hereby assign to you all our right, title, and interest, if any, in and to Output.\u201d Andres Guadamuz, an intellectual property law professor at the University of Sussex, <a target=\"_blank\" href=\"https:\/\/www.technollama.co.uk\/dall%C2%B7e-goes-commercial-but-what-about-copyright\" >wrote<\/a> in July 2022 that OpenAI appears to \u201ccleverly bypass most copyright questions through contract.\u201d<\/p>\n<p>In September 2023, a U.S. district court stated that a jury trial would be needed to determine whether it was fair use for an AI company to copy case summaries from <a target=\"_blank\" href=\"https:\/\/www.jdsupra.com\/legalnews\/ai-versus-westlaw-copyright-bellwether-6131058\/\" >Westlaw<\/a>\u2014a legal research platform owned by Thomson Reuters\u2014to train an AI program to quote pertinent passages from legal opinions in response to user questions.<\/p>\n<p>\u201c[B]y denying summary judgment on copyright infringement to the AI builder and user, the decision opens the door to the kind of lengthy, expensive and uncertain litigation that could deter builders and users of AI from using copyrighted works as training data,\u201d <a target=\"_blank\" href=\"https:\/\/www.mosessinger.com\/publications\/using-copyrighted-works-in-ai-training-data-may-infringe-even-if-the-ai-output-doesnt\" >according to<\/a> Moses &amp; Singer, a law firm based in New York.<\/p>\n<p><strong>Does Section 230 Exempt AI Companies From Responsibility?<\/strong><\/p>\n<p>Under <a target=\"_blank\" href=\"https:\/\/www.pbs.org\/newshour\/politics\/what-you-should-know-about-section-230-the-rule-that-shaped-todays-internet\" >Section 230 of the <\/a><a target=\"_blank\" 
href=\"https:\/\/www.pbs.org\/newshour\/politics\/what-you-should-know-about-section-230-the-rule-that-shaped-todays-internet\" >Communications Decency Act<\/a>\u2014which shields companies that host potentially litigious content posted by others\u2014social media companies like X and Meta (the parent company of Facebook) that carry content, including ads featuring AI-generated actors, \u201c<a target=\"_blank\" href=\"https:\/\/www.hollywoodreporter.com\/business\/business-news\/ai-copyright-law-studios-tech-actors-writers-1235638242\/\" >can claim immunity<\/a>.\u201d Since its enactment in 1996, Section 230 has been invoked to give tech firms significant legal protection from liability for content posted by third parties.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/progresschamber.org\/\" >Chamber of Progress<\/a>, a tech industry coalition whose members include Amazon, Apple, and Meta, argued that Section 230 should be expanded to protect AI companies from some infringement claims. That raises the issue of whether Section 230\u2019s exemption can also cover advertising and publicity for intellectual property rights.<\/p>\n<p><strong>AI Companies Offer Their Justifications<\/strong><\/p>\n<p>Tech companies with AI products have advanced <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/11\/4\/23946353\/generative-ai-copyright-training-data-openai-microsoft-google-meta-stabilityai\" >arguments justifying their methods<\/a>, including using copyrighted material to \u201ctrain\u201d their programs. 
Meta <a target=\"_blank\" href=\"https:\/\/www.documentcloud.org\/documents\/24117934-meta\" >asserts<\/a> that imposing \u201ca first-of-its-kind licensing regime now\u201d will lead to chaos and send developers scrambling to identify many millions of rights holders \u201cfor very little benefit, given that any fair royalty due would be incredibly small in light of the insignificance of any one work among an AI training set.\u201d<\/p>\n<p>Google points out that there wouldn\u2019t be any copyright questions if training could occur without creating copies. It further <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/legal\/litigation\/google-says-data-scraping-lawsuit-would-take-sledgehammer-generative-ai-2023-10-17\/\" >declares<\/a> that the act of \u201cknowledge harvesting\u201d\u2014like reading a book and learning information from it\u2014hasn\u2019t been considered an infringement by the courts. In that sense, Google is not doing anything different when it propagates AI outputs and makes them available to users.<\/p>\n<p>Microsoft <a target=\"_blank\" href=\"https:\/\/www.windowscentral.com\/microsoft\/microsoft-wants-you-to-be-sued-for-copyright-infringement-washes-its-hands-of-ai-copyright-misuse-and-says-users-should-be-liable-for-copyright-infringement\" >claims<\/a> that if the company were to obtain consent for accessible works to be used for training, AI innovation would be stifled. 
It would not be possible to <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/11\/4\/23946353\/generative-ai-copyright-training-data-openai-microsoft-google-meta-stabilityai\" >attain<\/a> the \u201cscale of data necessary to develop responsible AI models even when the identity of a work and its owner is known.\u201d Licensing arrangements could also prevent startups and companies in smaller countries from training their own AI models.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.anthropic.com\/\" >Anthropic<\/a>, an AI company, echoes Microsoft\u2019s argument, <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/11\/4\/23946353\/generative-ai-copyright-training-data-openai-microsoft-google-meta-stabilityai\" >maintaining<\/a> that \u201cappropriate limits to copyright\u201d are necessary \u201cto support creativity, innovation, and other values.\u201d<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/a16z.com\/\" >Andreessen Horowitz<\/a>, a venture capital company with many tech investments, <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/11\/4\/23946353\/generative-ai-copyright-training-data-openai-microsoft-google-meta-stabilityai\" >says<\/a> it has worked on the premise that the current copyright law allows any copying necessary to extract statistical facts to develop AI technologies. \u201cThose expectations have been a critical factor in the enormous investment of private capital into U.S.-based AI companies, which, in turn, has made the U.S. 
a global leader in AI.\u201d<\/p>\n<p>If these expectations are compromised, Andreessen Horowitz <a target=\"_blank\" href=\"https:\/\/gizmodo.com\/andreessen-horowitz-ai-copyright-office-ftc-1851005372\" >contends<\/a>, it could jeopardize future investment in AI and put the United States\u2019 economic prospects and national security at risk.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/huggingface.co\/\" >Hugging Face<\/a>, an AI company, <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/11\/4\/23946353\/generative-ai-copyright-training-data-openai-microsoft-google-meta-stabilityai\" >asserts<\/a> that using a given work in training its models \u201cis of a broadly beneficial purpose\u201d\u2014namely, an AI model \u201ccapable of creating a wide variety of different sort of outputs wholly unrelated to that underlying, copyrightable expression.\u201d Like OpenAI and other tech companies, Hugging Face relies on the fair use doctrine in collecting content to build its models.<\/p>\n<p><strong>Art, Photos, and AI<\/strong><\/p>\n<p>Many companies with copyrighted content argue against the justification provided by tech companies for using their material under \u201cfair use.\u201d In February 2023, Getty, an image licensing service, filed a <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/legal\/getty-images-lawsuit-says-stability-ai-misused-photos-train-ai-2023-02-06\/\" >lawsuit<\/a> against the creators of the AI art generator <a target=\"_blank\" href=\"https:\/\/stablediffusionlitigation.com\/\" >Stable Diffusion<\/a>, alleging \u201cbrazen infringement of Getty Images\u2019 intellectual property on a staggering scale.\u201d Getty Images stated that Stability AI, which owns Stable Diffusion, had copied 12 million images without permission, violating its copyright and trademark rights.<\/p>\n<p>Getty also <a target=\"_blank\" href=\"https:\/\/www.lexology.com\/library\/detail.aspx?g=051387ba-b805-4342-b7e1-2b98cb4f9b1c\" >dismissed any 
defense<\/a> that relied on fair use, arguing that Stable Diffusion produced commercial products that could jeopardize Getty\u2019s image marketing.<\/p>\n<p>Getty asserted that the images produced by the AI company\u2019s system were similar or derivative enough to constitute infringement. In another case, <a target=\"_blank\" href=\"https:\/\/docs.justia.com\/cases\/federal\/district-courts\/california\/candce\/3:2023cv00201\/407208\/67\" ><em>Andersen et al. v. Stability AI et al.<\/em><\/a>, filed in early 2023, three artists brought a class-action complaint against <a target=\"_blank\" href=\"https:\/\/news.artnet.com\/art-world\/class-action-lawsuit-ai-generators-deviantart-midjourney-stable-diffusion-2246770\" >several generative AI platforms<\/a>, claiming that Stability AI had used their \u201coriginal works without license to train AI in their styles,\u201d <a target=\"_blank\" href=\"https:\/\/hbr.org\/2023\/04\/generative-ai-has-an-intellectual-property-problem\" >stated<\/a> a Harvard Business Review article. The software could generate images in response to users\u2019 prompts that, the artists argued, were insufficiently \u201ctransformative\u2026 and, as a result, would be unauthorized derivative works.\u201d In legal terms, the artists claimed that Stability AI was guilty of \u201c<a target=\"_blank\" href=\"https:\/\/www.hollywoodreporter.com\/business\/business-news\/artists-copyright-infringement-case-ai-art-generators-1235632929\/\" >vicarious infringement<\/a>.\u201d<\/p>\n<p>Stability AI <a target=\"_blank\" href=\"https:\/\/www.technologyreview.com\/2022\/12\/16\/1065247\/artists-can-now-opt-out-of-the-next-version-of-stable-diffusion\/\" >announced<\/a> in 2022 that artists could opt out of the next generation of the image generator, which was released to some developers for preview <a target=\"_blank\" href=\"https:\/\/stability.ai\/news\/stable-diffusion-3-api\" >in April 2024<\/a>. 
This is not only \u201ctoo little, too late\u201d but also puts the burden of intellectual property protection on the artists, not the company, since Stability AI will only make an exception for works created by artists who opted out.<\/p>\n<p>The practice of training AI on original works without permission is widespread. This fact was further highlighted in December 2023 when a database of artists whose works were used to train Midjourney\u2014a generative AI program\u2014was leaked online. The <a target=\"_blank\" href=\"https:\/\/hyperallergic.com\/864947\/database-of-artists-used-to-train-ai-leaks-to-the-public\/\" >database<\/a> listed 16,000 artists, including Keith Haring, Salvador Dal\u00ed, David Hockney, and Yayoi Kusama.<\/p>\n<p>Artists protested in various ways, <a target=\"_blank\" href=\"https:\/\/hyperallergic.com\/806026\/digital-artists-are-pushing-back-against-ai\/\" >posting<\/a> \u201cNo to AI-Generated Images\u201d on social media, adopting a <a target=\"_blank\" href=\"https:\/\/hyperallergic.com\/853520\/nightshade-helps-artists-protect-their-work-from-ai-scraping\/\" >tool<\/a> that \u201cpoisoned\u201d image-generating software, and filing <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/legal\/litigation\/artists-take-new-shot-stability-midjourney-updated-copyright-lawsuit-2023-11-30\/\" >several lawsuits<\/a> accusing AI companies of infringing on intellectual property rights.<\/p>\n<p>One of these tools is called <a target=\"_blank\" href=\"https:\/\/nightshade.cs.uchicago.edu\/whatis.html\" >Nightshade<\/a>, whose website says that it is designed to \u201caddress\u201d the \u201cpower asymmetry\u201d between image owners and AI by transforming \u201cimages into \u2018poison\u2019 samples so that models training on them without consent will see their models learn unpredictable behaviors that deviate from expected norms.\u201d<\/p>\n<p>\u201cGenerative AI is hurting artists everywhere by stealing not only from our pre-existing 
work to build its libraries without consent, but our jobs too, and it doesn\u2019t even do it authentically or well,\u201d <a target=\"_blank\" href=\"https:\/\/hyperallergic.com\/865291\/ethical-questions-arise-after-ai-completes-keith-haring-painting\/\" >said<\/a> artist Brooke Peachley, according to a January 2024 article in Hyperallergic.<\/p>\n<p>Not all artists, however, oppose the use of AI in the creative process. In September 2022, the artist Kris Kashtanova <a target=\"_blank\" href=\"https:\/\/arstechnica.com\/information-technology\/2023\/02\/us-copyright-office-withdraws-copyright-for-ai-generated-comic-artwork\/\" >registered a copyright<\/a> for a graphic novel whose images were generated by Midjourney. In February 2023, the Copyright Office revoked the registration, arguing that Kashtanova had failed to reveal that an AI model had created the images for her novel.<\/p>\n<p>The Copyright Office <a target=\"_blank\" href=\"https:\/\/kleinfeldlp.com.ng\/legal-challenges-in-copyright-law-in-the-space-of-generative-ai-creativity\/\" >determined<\/a> that Midjourney, not Kashtanova, was responsible for the \u201cvisual material.\u201d A month later, guidance was <a target=\"_blank\" href=\"https:\/\/www.federalregister.gov\/documents\/2023\/03\/16\/2023-05321\/copyright-registration-guidance-works-containing-material-generated-by-artificial-intelligence\" >released<\/a> stating that when AI \u201cdetermines the expressive elements of its output, the generated material is not the product of human authorship.\u201d<\/p>\n<p>One of the artist\u2019s lawyers disagreed, <a target=\"_blank\" href=\"https:\/\/www.chiplawgroup.com\/copyright-office-denies-protection-for-ai-generated-images\/\" >stating<\/a> that the Copyright Act doesn\u2019t require such creative control and that original art can incorporate \u201c<a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >a degree of happenstance<\/a>.\u201d His position runs 
contrary to that of a law professor who said that a human user \u201cwho enters a text prompt into an AI program has\u2026 \u2018contributed nothing more than an idea\u2019 to the finished work,\u201d stated the Congressional Research Service report. On that view, the resulting work cannot be copyrighted.<\/p>\n<p>In another case involving an inventor named Stephen Thaler, a federal judge in Washington, D.C., affirmed the policy adopted by the Copyright Office. In <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2023\/08\/21\/arts\/design\/copyright-ai-artwork.htm\" >this case<\/a>, Thaler \u201clisted his computer system as the artwork\u2019s creator\u201d and wanted a copyright issued and given to him as the machine\u2019s owner. When the Copyright Office rejected his request, he sued the agency\u2019s director. The judge ruled that an AI-generated artwork wasn\u2019t subject to copyright protection because it lacks \u201c<a target=\"_blank\" href=\"https:\/\/crsreports.congress.gov\/product\/pdf\/LSB\/LSB10922\" >human involvement<\/a>.\u201d<\/p>\n<p>The Copyright Office also <a target=\"_blank\" href=\"https:\/\/www.theguardian.com\/technology\/2023\/sep\/24\/an-old-master-no-its-an-image-ai-just-knocked-up-and-it-cant-be-copyrighted\" >turned down<\/a> an artwork titled \u201cTh\u00e9\u00e2tre D\u2019op\u00e9ra Spatial\u201d by the artist Jason Michael Allen, whose piece won first prize at the Colorado State Fair in 2022. 
According to a September 2023 article in Wired, Allen <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/ai-art-copyright-matthew-allen\/\" >vowed<\/a>, \u201cI\u2019m going to fight this like hell,\u201d declaring that he would file a suit against the federal government for denying him copyright protection even though he used Midjourney to create his work.<\/p>\n<p>The Copyright Office stated Allen was entitled to apply for copyright solely for the parts of the work he had altered using Adobe Photoshop software. \u201cThe underlying AI-generated work merely constitutes raw material which Mr. Allen has transformed through his artistic contributions,\u201d Allen <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/ai-art-copyright-matthew-allen\/\" >wrote<\/a>. The Copyright Office was unpersuaded.<\/p>\n<p>Despite the Copyright Office\u2019s position and the artists\u2019 vehement opposition, some <a target=\"_blank\" href=\"https:\/\/www.christies.com\/en\/stories\/a-collaboration-between-two-artists-one-human-one-a-machine-0cd01f4e232f4279a525a446d60d4cd1\" >auction houses<\/a> and <a target=\"_blank\" href=\"https:\/\/www.museumnext.com\/article\/artificial-intelligence-and-the-future-of-museums\/\" >museums<\/a> have embraced AI. Several artists are happy to exhibit or sell their creations in these institutions. 
<a target=\"_blank\" href=\"https:\/\/aiartists.org\/mario-klingemann\" >German artist Mario Klingemann<\/a>, who specializes in AI works, created a series of portraits under the title <a target=\"_blank\" href=\"https:\/\/www.sothebys.com\/en\/auctions\/ecatalogue\/2019\/contemporary-art-day-auction-l19021\/lot.109.html\" ><em>Memories of Passersby I<\/em><\/a>, exhibited in 2019 at Sotheby\u2019s, a premier auction house.<\/p>\n<p>For his work, Klingemann <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2019\/3\/5\/18251267\/ai-art-gans-mario-klingemann-auction-sothebys-technology\" >used<\/a> a type of AI program known as a <a target=\"_blank\" href=\"https:\/\/www.techtarget.com\/searchenterpriseai\/definition\/generative-adversarial-network-GAN\" >generative adversarial network<\/a> (GAN), which pits two neural networks against each other: one generates images while the other judges them, and each round of feedback refines the results. In this case, the program was trained on a vast collection of portraits from the 17th, 18th, and 19th centuries, shortlisted by Klingemann. His was one of several AI-generated artworks that were put up for sale at Sotheby\u2019s.<\/p>\n<p>The Museum of Modern Art (MoMA) in New York has also exhibited AI-generated work, hosting the AI installation \u201c<a target=\"_blank\" href=\"https:\/\/www.moma.org\/calendar\/exhibitions\/5535\" >Unsupervised<\/a>\u201d in 2022. Assembled by the artist Refik Anadol, the work ponders what a machine might dream about after seeing more than 200 years of art in MoMA\u2019s collection. 
In the Hague, the Mauritshuis mounted an AI version of Johannes Vermeer\u2019s \u201c<a target=\"_blank\" href=\"https:\/\/nltimes.nl\/2023\/02\/22\/mauritshuis-hangs-artwork-created-ai-place-loaned-vermeer\" >Girl With a Pearl Earring<\/a>\u201d while the original was on loan.<\/p>\n<p><strong>Writers Confront AI<\/strong><\/p>\n<p>Like artists, writers have viewed AI warily, concerned that the ability of the software\u2014specifically ChatGPT\u2014to compose and draft essays, novels, and other forms of writing in response to user prompts could put them out of business. <a target=\"_blank\" href=\"https:\/\/www.publishersweekly.com\/pw\/by-topic\/digital\/content-and-e-books\/article\/93963-how-publishers-can-navigate-the-ai-revolution.html\" >Publishers Weekly<\/a>, which covers the publishing landscape, reminds readers that AI has existed for many years and is already integrated into much of the industry\u2019s software.<\/p>\n<p>The Authors Guild, as well as authors Paul Tremblay, Ta-Nehisi Coates, Michael Chabon, and comedian and writer Sarah Silverman, have filed multiple lawsuits against OpenAI and Meta, claiming the training process for AI programs infringed on their copyrights in written and visual works. 
In February 2024, however, a <a target=\"_blank\" href=\"https:\/\/www.courtlistener.com\/docket\/67538258\/104\/tremblay-v-openai-inc\/?campaign_id=4&amp;emc=edit_dk_20240214&amp;instance_id=115142&amp;nl\" >federal district court<\/a> threw out most of the arguments made in the copyright infringement lawsuits filed against OpenAI by these authors, <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/legal\/litigation\/openai-gets-partial-win-authors-us-copyright-lawsuit-2024-02-13\/\" >stating<\/a> that the plaintiffs had failed to show examples where AI-generated output was \u201csubstantially similar\u2014or similar at all\u2014to their books.\u201d<\/p>\n<p>The ruling, which left intact the authors\u2019 central argument that the OpenAI system \u201c<a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2024\/02\/14\/business\/dealbook\/inflation-cpi-soft-landing.html\" >copied and ingested<\/a>\u201d their copyrighted work without permission or compensation, was similar to an earlier ruling in a lawsuit filed by authors against <a target=\"_blank\" href=\"https:\/\/www.wionews.com\/technology\/meta-flouted-copyrights-to-train-its-ai-llama-despite-warning-from-lawyers-claims-lawsuit-669148\" >Meta\u2019s generative AI system<\/a>, Llama. \u201cWhen I make a query of Llama, I\u2019m not asking for a copy of Sarah Silverman\u2019s book,\u201d the judge in that case <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/legal\/litigation\/us-judge-trims-ai-copyright-lawsuit-against-meta-2023-11-09\/\" >wrote<\/a>, \u201cI\u2019m not even asking for an excerpt.\u201d<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/scammy-ai-generated-books-flooding-amazon\/?bxid=5be9f4663f92a404692f5fba&amp;cndid=26282935&amp;esrc\" >E-books, probably produced by AI<\/a> (with little or no human author involvement), have begun to appear on Amazon\u2019s online bookstore. 
AI researcher Melanie Mitchell was concerned that a book with the same title as hers\u2014<em>Artificial Intelligence: A Guide for Thinking Humans<\/em>, published in 2019\u2014had appeared on Amazon but was only 45 pages long, poorly written (though it contained some of Mitchell\u2019s original ideas), and attributed to one \u201cShumaila Majid.\u201d Although \u201cMajid\u201d had no author bio or internet presence, a search brought up several other titles under the same name.<\/p>\n<p>An <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/scammy-ai-generated-books-flooding-amazon\/?bxid=5be9f4663f92a404692f5fba&amp;cndid=26282935&amp;esrc\" >investigation by Wired magazine<\/a> using deepfake detection software revealed that Mitchell\u2019s suspicion was correct. The software found that the knockoff was 99 percent likely AI-generated. Amazon took down the Majid version, <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/scammy-ai-generated-books-flooding-amazon\/\" >stating<\/a>: \u201cWhile we allow AI-generated content, we don\u2019t allow AI-generated content that violates our Kindle Direct Publishing content <a target=\"_blank\" href=\"https:\/\/kdp.amazon.com\/en_US\/help\/topic\/GU72M65VRFPH43L6\" >guidelines<\/a>, including content that creates a disappointing customer experience.\u201d<\/p>\n<p>AI-generated summaries of books, marketed as e-books, are another widespread phenomenon that has alarmed writers. 
Computer scientist Fei-Fei Li, author of <a target=\"_blank\" href=\"https:\/\/www.amazon.com\/Worlds-See-Curiosity-Exploration-Discovery-ebook\/dp\/B0BPQSLVL6?ots=1&amp;tag=w050b-20&amp;linkCode=w50\" ><em>The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI<\/em><\/a>, <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/scammy-ai-generated-books-flooding-amazon\/\" >found more than a dozen different summaries of her work on Amazon<\/a>, none of which she had anything to do with.<\/p>\n<p>These e-books, which are summaries of original works, have been \u201cdramatically increasing in number,\u201d <a target=\"_blank\" href=\"https:\/\/janefriedman.com\/i-would-rather-see-my-books-pirated\/\" >said<\/a> Jane Friedman, a publishing expert, who herself was victimized by another \u201c<a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/scammy-ai-generated-books-flooding-amazon\/\" >AI-generated book scheme<\/a>.\u201d \u201cIt\u2019s common right now for a nonfiction author to celebrate the launch of their book, then within a few days discover one of these summaries for sale,\u201d <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/scammy-ai-generated-books-flooding-amazon\/\" >wrote<\/a> Kate Knibbs, a senior writer at Wired, in January 2024.<\/p>\n<p>However, the writers of these summaries may not be liable for infringement. Some experts specializing in intellectual property believe summaries are legal because they don\u2019t copy \u201cword-for-word\u201d from the book they\u2019re summarizing. Other IP experts are more skeptical. \u201cSimply summarizing a book is harder to defend,\u201d <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/scammy-ai-generated-books-flooding-amazon\/\" >said<\/a> <a target=\"_blank\" href=\"https:\/\/james.grimmelmann.net\/\" >James Grimmelmann<\/a>, an internet law professor at Cornell University. 
\u201cThere is still substantial similarity in the selection and arrangement of topics and probably some similarity in language.\u201d<\/p>\n<p>\u201cIt\u2019s disturbing to me, and on multiple moral levels seems wrong, to pull the heart and sensitivity out of the stories,\u201d <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/scammy-ai-generated-books-flooding-amazon\/\" >said<\/a> author Sarah Stankorb, according to the Wired report. \u201cAnd the language\u2014it seemed like they just ran it through some sort of thesaurus program, and it came out really bizarre.\u201d<\/p>\n<p>She suspects that her book <a target=\"_blank\" href=\"https:\/\/www.amazon.com\/Disobedient-Women-Faithful-Evangelical-Reckoning\/dp\/1546003800?ots=1&amp;tag=w050b-20&amp;linkCode=w50\" ><em>Disobedient Women: How a Small Group of Faithful Women Exposed Abuse, Brought Down Powerful Pastors, and Ignited an Evangelical Reckoning<\/em><\/a> was summarized and posted on Amazon before publication, based on an advance copy of the book distributed only to reviewers. She found the imitation blatant when she compared the two texts. \u201cIn my early days reporting, I might do an interview with a mompreneur, then spend the afternoon poring over Pew Research Center stats on Americans disaffiliating from religion.\u201d<\/p>\n<p>That\u2019s the opening line from Stankorb\u2019s book. A summary version of that line stated: \u201cIn the early years of their reporting, they might conduct a mompreneur interview, followed by a day spent delving into Pew Research Center statistics about Americans who had abandoned their religious affiliations.\u201d The same software that Wired used to determine that AI generated Majid\u2019s e-book revealed that Stankorb\u2019s summary was AI-generated as well.<\/p>\n<p>According to Dave Karpf, an associate professor of media at George Washington University, AI might not be as dangerous as people predict. 
\u201cI suspect\u2026 that 2024 will be the year we are reminded of the Ghost of Napster\u2014and other failed digital futures,\u201d he wrote in <a target=\"_blank\" href=\"https:\/\/foreignpolicy.com\/2023\/12\/31\/artificial-intelligence-ai-future-chatgpt-napster-internet\/\" >Foreign Policy magazine<\/a> in December 2023. \u201cThe story that I often hear from AI evangelists is that technologies such as ChatGPT are here, and they are inevitable.\u201d<\/p>\n<p>\u201cIf outdated copyright laws are at odds with the scraping behavior of large language models, copyright law will surely need to bend as a result,\u201d Karpf <a target=\"_blank\" href=\"https:\/\/foreignpolicy.com\/2023\/12\/31\/artificial-intelligence-ai-future-chatgpt-napster-internet\/\" >wrote<\/a>. But he believes that AI could be \u201canother Amazon,\u201d or it may turn out more like WeWork, \u201ca company that so heavily inflated its own revenue projections that it couldn\u2019t break even in today\u2019s rental market.\u201d<\/p>\n<p>\u201cCopyright law doesn\u2019t bend to accommodate your vision of the digital future\u2014the digital future bends to accommodate copyright law,\u201d Karpf <a target=\"_blank\" href=\"https:\/\/foreignpolicy.com\/2023\/12\/31\/artificial-intelligence-ai-future-chatgpt-napster-internet\/\" >added<\/a>.<\/p>\n<p><strong>AI-Generated Song Goes Viral<\/strong><\/p>\n<p>The controversy surrounding the <a target=\"_blank\" href=\"https:\/\/www.npr.org\/2023\/04\/21\/1171032649\/ai-music-heart-on-my-sleeve-drake-the-weeknd\" >AI-generated song \u201cHeart on My Sleeve,\u201d<\/a> using AI versions of the voices of rap star Drake and singer The Weeknd, raises some of the unprecedented issues posed by AI. While \u201cHeart\u201d received a lot of attention, it is only one in a spate of AI-generated songs with accompanying videos. 
An AI-generated version of Johnny Cash <a target=\"_blank\" href=\"https:\/\/futurism.com\/the-byte\/ai-johnny-cash-taylor-swift\" >singing<\/a> a Taylor Swift song went viral online in 2023.<\/p>\n<p>After its release in April 2023, \u201cHeart on My Sleeve\u201d was credited to Ghostwriter and heard millions of times on streaming services. Although Universal Music Group, which represents both artists, argued that AI companies violate copyright by using these artists\u2019 songs in training data, legal observers say the song was original even if it was imitative. They also claim that <a target=\"_blank\" href=\"https:\/\/variety.com\/2023\/music\/news\/ai-generated-drake-the-weeknd-song-submitted-for-grammys-1235714805\/\" >Ghostwriter<\/a> wasn\u2019t infringing on any existing work whose rights belonged to Drake, The Weeknd, and Universal. By the time Universal sent take-down notices, third parties had copied and uploaded the song.<\/p>\n<p>Copyright does not protect an artist\u2019s voice, style, or flow. However, infringement may occur if a song is similar enough to an earlier work in style and \u201cfeel,\u201d an <a target=\"_blank\" href=\"https:\/\/wjlta.com\/2024\/01\/24\/time-to-face-the-music-a-i-music-copyright-infringement-battle-makes-it-to-court\/\" >ambiguous determination<\/a> that courts are frequently called upon to adjudicate.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.businessinsider.com\/drake-ai-music-clone-easy-tiktok-2023-5\" >Jered Chavez<\/a> has also been steadily making AI-generated music clips, producing a cappella versions of songs with AI models trained to sound like the most recognizable musicians in the world. These clips have proven remarkably popular on TikTok and are cheap and simple to make.<\/p>\n<p>Sting and other music artists have denounced the production of AI songs that use famous artists\u2019 vocals. 
In a May 2023 <a target=\"_blank\" href=\"https:\/\/www.bbc.com\/news\/entertainment-arts-65627089\" >interview<\/a> with <a target=\"_blank\" href=\"https:\/\/www.bbc.com\/news\/entertainment-arts-65627089\" >BBC<\/a> News, Sting criticized the use of AI in music, saying that it would require musicians to defend their \u201chuman capital against AI,\u201d declaring, \u201cThe building blocks of music belong to us, to human beings.\u201d<\/p>\n<p>\u201cIt\u2019s easy to use copyright as a cudgel in this kind of circumstance to go after new creative content that you feel like crosses some kind of line, even if you don\u2019t have a really strong legal basis for it, because of how strong the copyright system is,\u201d <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/5\/1\/23703087\/ai-drake-the-weeknd-music-copyright-legal-battle-right-of-publicity\" >said<\/a> Nick Garcia, policy counsel at Public Knowledge, to the Verge.<\/p>\n<p>Another concern is the violation of artists\u2019 rights when their voices are used to train AI programs. Yet, creators and publishers are armed with relevant laws to fight back. The <a target=\"_blank\" href=\"https:\/\/www.inta.org\/topics\/right-of-publicity\/\" >right of publicity<\/a> (sometimes called the \u201cright of privacy\u201d) can be invoked by a singer whose voice has been cloned. Still, this right is codified only in certain states\u2014notably <a target=\"_blank\" href=\"https:\/\/dos.ny.gov\/right-publicity\" >New York<\/a> and <a target=\"_blank\" href=\"https:\/\/www.dmlp.org\/legal-guide\/california-right-publicity-law\" >California<\/a>, where many major entertainment companies are located. 
The real Drake and The Weeknd could sue Ghostwriter using the same law that Wheel of Fortune\u2019s longtime co-host Vanna White relied on to <a target=\"_blank\" href=\"https:\/\/law.justia.com\/cases\/federal\/appellate-courts\/F2\/971\/1395\/71823\/\" >sue over a metallic android<\/a><a target=\"_blank\" href=\"https:\/\/law.justia.com\/cases\/federal\/appellate-courts\/F2\/971\/1395\/71823\/\" > lookalike used in a Samsung advertisement<\/a> in 1992, as the Verge article <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/5\/1\/23703087\/ai-drake-the-weeknd-music-copyright-legal-battle-right-of-publicity\" >pointed out<\/a>.<\/p>\n<p>These right of publicity laws <a target=\"_blank\" href=\"https:\/\/www.bloomberglaw.com\/external\/document\/X42ALVPC000000\/trademarks-professional-perspective-trademarks-and-the-right-of-\" >protect<\/a> against unauthorized commercial uses of a person\u2019s name, likeness, and persona while preserving individuals\u2019 exclusive right to profit from their identities.<\/p>\n<p><strong>\u2018If You Can\u2019t Beat \u2019Em, Join \u2019Em\u2019<\/strong><\/p>\n<p>The singer Grimes has taken a different approach to AI by <a target=\"_blank\" href=\"https:\/\/www.npr.org\/2023\/04\/24\/1171738670\/grimes-ai-songs-voice\" >allowing<\/a> her fans to create and distribute songs using an AI-produced version of the artist\u2019s voice without legal penalty.<\/p>\n<p>However, she isn\u2019t giving up all rights since the invitation requires fans to use a customized \u201cGrimesAI voiceprint\u201d using a software program called <a target=\"_blank\" href=\"https:\/\/elf.tech\/connect\" >Elf.Tech<\/a>. 
While they can use the program to produce original songs, they still need to credit the singer as the main or featured artist.<\/p>\n<p>Anyone who uses her voiceprint will also have to <a target=\"_blank\" href=\"https:\/\/www.prweb.com\/releases\/tunecore-partners-with-createsafe-using-grimes-elf-tech-to-facilitate-collaboration-between-ai-and-self-releasing-artists-821205836.html\" >split the royalties with her on a 50\/50 basis<\/a>, and Grimes will have to approve the \u201ccollaboration.\u201d Grimes further <a target=\"_blank\" href=\"https:\/\/www.musicbusinessworldwide.com\/tunecore-partners-with-grimes-to-distribute-her-ai-collaborations\" >stipulates<\/a> that she \u201cdoes not claim any ownership of the sound recording or the underlying composition\u201d unless the composition originated with Grimes. She said fans should feel free to use her voice \u201c<a target=\"_blank\" href=\"https:\/\/twitter.com\/Grimezsz\/status\/1650304051718791170?lang=en\" >without penalty<\/a>\u201d and added that she <a target=\"_blank\" href=\"https:\/\/twitter.com\/Grimezsz\/status\/1650304205981089793\" >liked the idea<\/a> of \u201copen-sourcing all art and killing copyright.\u201d<\/p>\n<p><strong>The New York Times and Other Publications Sue<\/strong><\/p>\n<p>In December 2023, the New York Times <a target=\"_blank\" href=\"https:\/\/apnews.com\/article\/openai-new-york-times-chatgpt-lawsuit-grisham-nyt-69f78c404ace42c0070fdfb9dd4caeb7\" >sued<\/a> the tech companies OpenAI and Microsoft for copyright infringement. It was the first such challenge by a major American news organization. 
The Times <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/12\/27\/24016212\/new-york-times-openai-microsoft-lawsuit-copyright-infringement\" >contends<\/a> that OpenAI\u2019s ChatGPT and Microsoft\u2019s Copilot can produce content nearly identical to the Times articles, giving them a \u201cfree ride on its massive investment in journalism to build substitutive products without permission or payment.\u201d <a target=\"_blank\" href=\"https:\/\/apnews.com\/article\/openai-new-york-times-chatgpt-lawsuit-grisham-nyt-69f78c404ace42c0070fdfb9dd4caeb7\" >NYT claims<\/a> that Microsoft\u2019s search engine Copilot, which uses OpenAI\u2019s ChatGPT, provided results that substantially copied \u201cverbatim\u201d from the paper\u2019s Wirecutter content.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/openai.com\/index\/openai-and-journalism\/\" >OpenAI disputed<\/a> these claims: \u201cWe support journalism, partner with news organizations, and believe the New York Times lawsuit is without merit.\u201d NYT admitted in its suit that it had been in <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/12\/27\/24016212\/new-york-times-openai-microsoft-lawsuit-copyright-infringement\" >talks<\/a> with Microsoft and OpenAI about terms for resolving the dispute \u201cbut failed to reach a solution,\u201d <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/12\/27\/24016212\/new-york-times-openai-microsoft-lawsuit-copyright-infringeme\" >according<\/a> to a December 2023 article in the Verge.<\/p>\n<p>In April 2024, eight daily newspapers (including the New York Daily News, Chicago Tribune, and Denver Post) owned by Alden Global Capital followed the Times\u2019 example. 
They <a target=\"_blank\" href=\"https:\/\/www.axios.com\/2024\/04\/30\/microsoft-openai-lawsuit-copyright-newspapers-alden-global\" >sued OpenAI and Microsoft<\/a>, alleging that the tech companies used millions of copyrighted articles without permission to train their generative AI products.<\/p>\n<p>Alden\u2019s suit also cited errors by OpenAI\u2019s ChatGPT in response to user prompts and accused the companies of causing \u201creputational damage.\u201d One OpenAI response stated that the Chicago Tribune had recommended an infant lounger, which was not the case. Moreover, the product had been recalled because it was linked to newborn deaths. In another example, the AI\u2019s \u201cmade-up answers\u201d falsely claimed that \u201cresearch\u201d published in the Denver Post stated that smoking could \u201ccure\u201d asthma, according to the news website Axios. An OpenAI spokeswoman <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2024\/04\/30\/business\/media\/newspapers-sued-microsoft-openai.html\" >claimed<\/a> the company \u201cwas not previously aware of Alden\u2019s concerns.\u201d<\/p>\n<p>The suit comes as other major media companies, such as the <a target=\"_blank\" href=\"https:\/\/apnews.com\/article\/openai-chatgpt-associated-press-ap-f86f84c5bcc2f3b98074b38521f5f75a\" >Associated Press<\/a> and <a target=\"_blank\" href=\"https:\/\/techcrunch.com\/2023\/12\/13\/openai-inks-deal-with-axel-springer-on-licensing-news-for-model-training\/\" >Axel Springer<\/a>, the German owner of outlets like Politico and Business Insider, have reached data licensing agreements with OpenAI.<\/p>\n<p>OpenAI has also conducted <a target=\"_blank\" href=\"https:\/\/www.bloomberg.com\/news\/articles\/2024-01-10\/openai-in-talks-with-cnn-fox-and-time-to-license-content\" >discussions<\/a> with the News\/Media Alliance, a journalism trade group representing more than 2,200 media outlets worldwide, \u201cto explore opportunities, discuss their concerns, and provide solutions.\u201d In addition, the 
AI company has been in conversations with Gannett, CNN, and IAC, an internet media company.<\/p>\n<p>Some companies have realized that it\u2019s better to collaborate with AI companies than to fight them. In May 2024, <a target=\"_blank\" href=\"https:\/\/openai.com\/index\/news-corp-and-openai-sign-landmark-multi-year-global-partnership\/\" >News Corp and OpenAI<\/a> announced a multiyear agreement to bring the media company\u2019s content to OpenAI. That gives the software company access to \u201ccurrent and archived content\u201d from the Wall Street Journal, Barron\u2019s, MarketWatch, New York Post, the Times and the Sunday Times, and the Sun (UK), as well as such Australian newspapers as the Daily Telegraph, the Courier Mail, the Advertiser, and the Herald Sun.<\/p>\n<p>In May 2024, the Atlantic and Vox Media (which includes Vox, the Verge, Eater, the Cut, and Vulture) reached an agreement with OpenAI that allows the software company to use their archived content to train its AI models. \u201cBoth agreements also allow OpenAI to tap into the respective publishers\u2019 current content to fuel responses to user queries in OpenAI products, including ChatGPT,\u201d <a target=\"_blank\" href=\"https:\/\/www.axios.com\/2024\/05\/29\/atlantic-vox-media-openai-licensing-deal\" >wrote<\/a> Axios senior media reporter Sara Fischer.<\/p>\n<p>Not everyone involved was pleased with this arrangement. In the Atlantic, writer Damon Beres called the multiyear agreement a \u201c<a target=\"_blank\" href=\"https:\/\/www.theatlantic.com\/technology\/archive\/2024\/05\/a-devils-bargain-with-openai\/678537\/\" >Devil\u2019s Bargain<\/a>,\u201d pointing out that the technology has \u201cnot exactly felt like a friend to the news industry.\u201d However, Beres conceded that \u201cgenerative AI could turn out to be fine\u201d but that it would take time to find out.<\/p>\n<p>Predictably, compensation is a crucial issue. 
OpenAI has reportedly offered <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2024\/1\/4\/24025409\/openai-training-data-lowball-nyt-ai-copyright\" >between $1 and $5<\/a> million annually to license copyrighted articles, although for some top publishers, the amount OpenAI has proposed is too low.<\/p>\n<p>Marc Benioff, Salesforce Inc.\u2019s chief executive officer and owner of Time magazine, asserted that AI companies have been ripping off \u201cintellectual property to build their technology.\u201d \u201cAll the training data has been stolen,\u201d he <a target=\"_blank\" href=\"https:\/\/telecom.economictimes.indiatimes.com\/news\/internet\/openai-ceo-sam-altman-salesforces-mark-benioff-disagree-on-ais-use-of-copyrighted-content\/106943213\" >said<\/a> at the World Economic Forum in Davos in January 2024.<\/p>\n<p>Benioff said, \u201cNobody really exactly knows\u201d what an equitable compensation for their data would be but suggested that \u201cAI companies should standardize payments to treat content creators fairly.\u201d Despite his concerns, Benioff\u2019s Time is among publications <a target=\"_blank\" href=\"https:\/\/www.reuters.com\/technology\/openai-content-licensing-talks-with-cnn-fox-time-bloomberg-news-2024-01-11\/\" >negotiating with OpenAI<\/a> to license their work.<\/p>\n<p>In February 2024, <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2024\/02\/28\/technology\/openai-copyright-suit-media.html\" >three online media companies<\/a>\u2014Raw Story, Alternet, and the Intercept\u2014sued OpenAI, claiming that the company had trained its chatbot using copyrighted works without proper attribution. The three companies sought $2,500 per violation and asked OpenAI to remove all copyrighted articles in its data training sets. 
The Intercept also sued Microsoft, an OpenAI partner that created its own chatbot using the same articles.<\/p>\n<p>\u201cIt is time that news organizations fight back against Big Tech\u2019s continued attempts to monetize other people\u2019s work,\u201d said John Byrne, the chief executive and founder of Raw Story, which owns Alternet, according to an <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2024\/02\/28\/technology\/openai-copyright-suit-media.html\" >article<\/a> in the New York Times. \u201cBig Tech has decimated journalism. It\u2019s time that publishers take a stand.\u201d<\/p>\n<p><strong>The SAG-AFTRA Strike: Why AI Matters to Screen, Television, and Streamer Actors<\/strong><\/p>\n<p>The use of AI was one of the major points of contention for the labor union, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA), which went on strike from July to November 2023. The screen actors\u2019 strike overlapped for several months with the screenwriters\u2019 walkout. For the Writers Guild of America (WGA), as the screenwriters guild is known, AI was also one of the <a target=\"_blank\" href=\"https:\/\/www.wga.org\/uploadedfiles\/news_and_events\/public_policy\/WGA_Comment_on_USCO_Artificial_Intelligence_and_Copyright.pdf\" >outstanding issues<\/a> in negotiating a new contract with the studios.<\/p>\n<p><a target=\"_blank\" href=\"https:\/\/www.sagaftra.org\/sag-aftra-statement-use-artificial-intelligence-and-digital-doubles-media-and-entertainment\" >SAG-AFTRA\u2019s March 2023 statement<\/a> left no room for ambiguity: \u201cHuman creators are the foundation of the creative industries, and we must ensure that they are respected and paid for their work. Governments should not create new copyright or other IP exemptions that allow AI developers to exploit creative works, or professional voices and likenesses, without permission or compensation.
Trustworthiness and transparency are essential to the success of AI.\u201d<\/p>\n<p>SAG-AFTRA\u2019s executive director Duncan Crabtree-Ireland \u201ccalled out the \u2018double standard\u2019 in the relationship between actors and corporations when it comes to copyright infringement,\u201d <a target=\"_blank\" href=\"https:\/\/www.theregister.com\/Author\/Katyanna-Quach\" >wrote<\/a> Katyanna Quach in an October 2023 article in the Register. Why, he asked, was it permissible for businesses to use AI to generate material as they wished, when it would be a problem for a person to use a business\u2019s intellectual property?<\/p>\n<p>\u201cAfter all, if an individual decided to infringe on one of these companies\u2019 copyright protected content and distribute it without paying for the licensing rights, that individual would face a great deal of financial and legal ramifications,\u201d Crabtree-Ireland <a target=\"_blank\" href=\"https:\/\/kvgo.com\/ftc\/Creative-Economy-and-Generative-AI-October-4-2023\" >said<\/a> at a conference titled \u201cCreative Economy and Generative AI.\u201d \u201cSo why is the reverse not true? Shouldn\u2019t the individuals whose intellectual property was used to train the AI algorithm be at least equally protected?\u201d<\/p>\n<p>Actors feared corporations could repeatedly exploit their likenesses for free once the actors were scanned. Tom Hanks has already <a target=\"_blank\" href=\"https:\/\/www.theregister.com\/2023\/10\/02\/tom_hanks_ai_advert\/\" >denounced<\/a> the use of his likeness for commercial purposes: \u201cThere\u2019s a video out there promoting some dental plan with an AI version of me.
I have nothing to do with it.\u201d The daughter of actor Robin Williams has <a target=\"_blank\" href=\"https:\/\/brightside.me\/articles\/robin-williams-daughter-speaks-out-against-ai-recreating-her-dad-815948\/\" >issued a statement<\/a> finding it \u201cdisturbing\u201d that her father\u2019s voice was being replicated in AI tests.<\/p>\n<p>Actress Scarlett Johansson also found that her voice and likeness were used in a 22-second online ad on X. Her attorney <a target=\"_blank\" href=\"https:\/\/variety.com\/2023\/digital\/news\/scarlett-johansson-legal-action-ai-app-ad-likeness-1235773489\/\" >filed a suit<\/a>. Taylor Swift\u2019s face and voice were <a target=\"_blank\" href=\"https:\/\/www.today.com\/food\/news\/taylor-swift-le-creuset-cookware-giveaway-fake-rcna133325\" >featured<\/a> in advertisements for Le Creuset cookware. In the ads, the singer\u2019s clone addressed her fans as \u201cSwifties\u201d and said she was \u201cthrilled to be handing out free cookware sets,\u201d <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2024\/01\/09\/technology\/taylor-swift-le-creuset-ai-deepfake.html\" >stated<\/a> a New York Times article. While Swift reportedly likes Le Creuset products, she never appeared in any of its ads.<\/p>\n<p>Johansson was in the <a target=\"_blank\" href=\"https:\/\/www.pressrundown.com\/business\/openai-pauses-chatgpt-voice-due-to-resemblance-to-scarlett-johansson\" >news<\/a> again in May 2024 when she <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2024\/05\/20\/technology\/scarlett-johannson-openai-voice.html\" >alleged<\/a> that OpenAI was <a target=\"_blank\" href=\"https:\/\/www.pressrundown.com\/business\/openai-pauses-chatgpt-voice-due-to-resemblance-to-scarlett-johansson\" >using her voice<\/a> for a conversational ChatGPT voice assistant called Sky.
(Sky was one of five voice assistants OpenAI introduced.) Sam Altman, OpenAI\u2019s CEO, asserted that the voice wasn\u2019t Johansson\u2019s but the voice of another actress whose identity he declined to disclose. He had, however, initially approached Johansson, citing his admiration for the 2013 film \u201cHer,\u201d in which she provided the voice of an AI system.<\/p>\n<p>In response to Johansson\u2019s complaint, Altman announced that he was suspending the use of Sky\u2019s voice. \u201cOut of respect for Ms. Johansson, we have paused using Sky\u2019s voice in our products,\u201d Altman <a target=\"_blank\" href=\"https:\/\/www.npr.org\/2024\/05\/20\/1252495087\/openai-pulls-ai-voice-that-was-compared-to-scarlett-johansson-in-the-movie-her\" >said<\/a> in a statement to NPR. \u201cWe are sorry to Ms. Johansson that we didn\u2019t communicate better.\u201d The actress wasn\u2019t appeased. \u201cWhen I heard the release demo, I was shocked, angered, and in disbelief that Mr. Altman would pursue a voice that sounded so eerily similar to mine that my closest friends and news outlets could not tell the difference,\u201d she <a target=\"_blank\" href=\"https:\/\/x.com\/BobbyAllyn\/status\/1792679435701014908\" >said<\/a>.<\/p>\n<p>Numerous other AI celebrity endorsements, such as an AI clone of country singer Luke Combs <a target=\"_blank\" href=\"https:\/\/www.rollingstone.com\/music\/music-features\/weight-loss-gummies-ads-scam-ai-luke-combs-lainey-wilson-1234782413\/\" >promoting<\/a> weight loss gummies, have popped up.
AI versions of the journalist <a target=\"_blank\" href=\"https:\/\/www.instagram.com\/p\/Cx54IIFuozI\/?utm_source=ig_embed&amp;ig_rid=bcc2f57d-b3cf-456f-a0a4-3088b0cbefa8&amp;img_index=1\" >Gayle King<\/a> and the YouTube influencer Jimmy Donaldson (\u201c<a target=\"_blank\" href=\"https:\/\/twitter.com\/MrBeast\/status\/1709046466629554577?ref_src\" >MrBeast<\/a>\u201d) have also appeared in ads without their permission.<\/p>\n<p>In November 2023, <a target=\"_blank\" href=\"https:\/\/prismreports.org\/2023\/12\/05\/sag-aftra-contract-falls-short-ai-protections\/\" >SAG-AFTRA signed a deal<\/a> that allowed the digital replication of members\u2019 voices for video games and other forms of entertainment if the companies secured consent <a target=\"_blank\" href=\"https:\/\/variety.com\/2024\/tv\/news\/sag-aftra-tv-animation-contracts-artificial-intelligence-1235950245\/\" >and guaranteed minimum payments<\/a>. The agreement will be a \u201cbig benefit to talent and a big benefit to studios,\u201d <a target=\"_blank\" href=\"https:\/\/www.business-standard.com\/world-news\/hollywood-actors-union-signs-first-big-deal-for-ai-in-voice-over-work-124011001079_1.html\" >said<\/a> Shreyas Nivas, co-founder and chief executive officer of Replica, a voice AI technology company, adding that it would \u201c[provide] a framework for use of AI in the production of video games,\u201d according to Business Standard.<\/p>\n<p><strong>Video Game Actors Strike<\/strong><\/p>\n<p>Video game performers walked off the job in July 2024 after contract negotiations between the union and the entertainment industry collapsed. <a target=\"_blank\" href=\"https:\/\/apnews.com\/article\/sagaftra-video-game-performers-ai-strike-4f4c7d846040c24553dbc2604e5b6034\" >Negotiations<\/a> with gaming companies, including divisions of Activision, Warner Brothers, Electronic Arts, Insomniac Games, and Walt Disney Co., over a new interactive media agreement had been ongoing for two years.
The industry accounts for more than $100 billion in revenue annually, according to game market forecaster <a target=\"_blank\" href=\"https:\/\/newzoo.com\/resources\/blog\/last-looks-the-global-games-market-in-2023\" >Newzoo<\/a>. While the video game performers are part of SAG-AFTRA, they work under a different contract from the one covering TV and film actors.<\/p>\n<p>As in the case of the SAG-AFTRA strike, AI was at the forefront of the dispute. The union believes its members are harmed if their likenesses are used to train AI to replicate an actor\u2019s voice or create a digital replica <a target=\"_blank\" href=\"https:\/\/apnews.com\/article\/aigenerated-voice-clones-video-game-actors-replica-studios-sagaftra-517cc248f60a2f5e35f9b239b70f20a7\" >without consent or fair compensation<\/a>. \u201cThe industry has told us point-blank that they do not necessarily consider everyone who is rendering movement performance to be a performer that is covered by the collective bargaining agreement,\u201d <a target=\"_blank\" href=\"https:\/\/www.cbsnews.com\/news\/video-game-actors-strike-ai-sag-actors-union\/\" >said<\/a> Ray Rodriguez, chief contracts officer for SAG-AFTRA.<\/p>\n<p>The industry negotiators, meanwhile, have been unable to reach agreement with the union on the remaining AI issues.
\u201cWe have already found common ground on 24 out of 25 proposals, including historic wage increases and additional safety provisions,\u201d <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/video-game-voice-actors-are-going-on-strike-over-ai\/?source=Email_0_EDT_WIR_NEWSLETTER_0_DAILY_ZZ&amp;utm_source=nl&amp;utm_brand=wired&amp;utm_mailing=WIR_Daily_072624&amp;utm_campaign=aud-dev&amp;utm_medium=email&amp;utm_content=WIR_Daily_072624&amp;bxid=5be9f4663f92a404692f5fba&amp;cndid=26282935&amp;hasha=f9863c1a66b7d467fe7ef67ebff76b37&amp;hashb=f6877553839b3dc1fbd44403a2770a2df97af336&amp;hashc=f373637b64e78ec70a6b19fb56aa496ebb0cbab4c5316dcbd2181f783d8ee6bb&amp;esrc=OIDC_SELECT_ACCOUNT_PAGE&amp;utm_term=WIR_Daily_Active\" >said<\/a> Audrey Cooling, a spokesperson for the video game companies in the negotiations. \u201cOur offer is directly responsive to SAG-AFTRA\u2019s concerns and extends meaningful AI protections that include requiring consent and fair compensation to all performers working under the IMA [Interactive Media Agreement]. These terms are among the strongest in the entertainment industry.\u201d<\/p>\n<p><strong>WGA Strike: Why Screenwriters Fear AI<\/strong><\/p>\n<p>When the screenwriters\u2014who work on film scripts and TV programs (including late-night shows)\u2014struck in May 2023, they also demanded protections against their work being used to train AI software and against AI being used to write or rewrite scripts.
Using AI for these purposes could theoretically save the studios a lot of money\u2014and potentially put a lot of writers out of work.<\/p>\n<p>In their <a target=\"_blank\" href=\"https:\/\/www.akingump.com\/en\/insights\/alerts\/ai-concerns-of-wga-and-sag-aftra-what-is-allowed\" >statement<\/a>, the Writers Guild of America declared that \u201cGAI (generative artificial intelligence) cannot be a \u2018writer\u2019 or \u2018professional writer\u2019 as defined in the MBA [minimum basic agreement] because it is not a person, and therefore materials produced by GAI should not be considered literary material under any MBA.\u201d<\/p>\n<p>The WGA held that AI is allowed in some instances, such as when the employer discloses that AI wrote the material or when the writer uses AI in preparing their screenplay or teleplay with the company\u2019s consent.<\/p>\n<p>When the <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/9\/26\/23891835\/wga-contract-summary-ai-streaming-data\" >contract was agreed upon<\/a> and the strike ended in September 2023, the guild received much of what it wanted regarding salary increases and AI.<\/p>\n<p>The studios agreed that AI-generated content couldn\u2019t be considered source material, meaning that a studio executive couldn\u2019t ask writers to create a story using ChatGPT and then ask them to turn it into a script (with the executive claiming rights to the original story).
The WGA <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2023\/9\/26\/23891835\/wga-contract-summary-ai-streaming-data\" >also<\/a> \u201creserves the right to assert that exploitation of writers\u2019 material to train AI is prohibited by MBA or other law.\u201d<\/p>\n<p><strong>Film Directors Accept AI<\/strong><\/p>\n<p>In marked contrast to SAG-AFTRA and the WGA, which went out on strike in 2023 to secure better terms in their contracts, the Directors Guild of America (DGA) quickly <a target=\"_blank\" href=\"https:\/\/deadline.com\/2024\/01\/dga-revisions-amptp-contract-streaming-bonus-1235805069\/\" >agreed to a new contract<\/a>. Yet film and TV directors are in the same situation as writers and actors: they are hired for each work they direct.<\/p>\n<p>Under U.S. copyright law, they are considered employees, while producers are the owners of any copyright (<a target=\"_blank\" href=\"https:\/\/www.hollywoodreporter.com\/business\/business-news\/ai-copyright-law-studios-tech-actors-writers-1235638242\/\" >more rights accrue<\/a> to directors in other countries, including the United Kingdom, France, and Italy). Rights are allocated as a result of union contracts with studios.
However, the absence of laws recognizing creators\u2019 rights to their creations is alarming because of the advent of generative AI tools, which studios may exploit.<\/p>\n<p>In a <a target=\"_blank\" href=\"https:\/\/www.hollywoodreporter.com\/dailyedition\/08-11-2023\/1235640124\/\" >statement<\/a>, the DGA warned: \u201cThese third parties, who are not bound to our collective bargaining agreements, may ingest and regurgitate copyrighted films and television shows into AI systems without the participation of the copyright owner or the need to agree to the terms of our new agreement.\u201d<\/p>\n<p>In case the courts prove unequipped to deal with this issue, the DGA and WGA have called for the \u201c<a target=\"_blank\" href=\"https:\/\/www.hollywoodreporter.com\/business\/business-news\/ai-copyright-law-studios-tech-actors-writers-1235638242\/\" >establishment of moral rights<\/a>\u201d that would recognize directors (and writers) as the original authors of their work, \u201c[giving] them larger financial and creative control over exploitation of their material even when they don\u2019t own the copyrights,\u201d <a target=\"_blank\" href=\"https:\/\/www.hollywoodreporter.com\/business\/business-news\/ai-copyright-law-studios-tech-actors-writers-1235638242\/\" >stated<\/a> the Hollywood Reporter.<\/p>\n<p><strong>Why the Studios Defend AI<\/strong><\/p>\n<p>The Motion Picture Association (MPA), AI companies like OpenAI and Meta, and tech advocacy groups see opportunities where the unions see a threat.
The MPA and software companies differ on \u201cwhether new legislation is warranted to address the unauthorized use of copyrighted material to train AI systems and the mass generation of potentially infringing works based on existing content,\u201d <a target=\"_blank\" href=\"https:\/\/www.hollywoodreporter.com\/business\/business-news\/ai-copyright-law-studios-tech-actors-writers-1235638242\/\" >according<\/a> to the Hollywood Reporter article.<\/p>\n<p>The MPA, meanwhile, also declared that the question of fair use should be determined on a \u201c<a target=\"_blank\" href=\"https:\/\/www.hollywoodreporter.com\/business\/business-news\/ai-copyright-law-studios-tech-actors-writers-1235638242\/\" >case-by-case basis<\/a>.\u201d \u201cFor example, fine-tuning an AI model, specifically using the library of James Bond movies for the purpose of making a competing movie that appeals to the same audience, likely would weigh against fair use.\u201d<\/p>\n<p>Despite exceptions like the hypothetical new Bond movie, the MPA argued in favor of \u201clooser standards\u201d when copyrighting works created by AI. 
It maintained that the Copyright Office is \u201ctoo rigid\u201d in conferring intellectual property rights only on works created by humans <a target=\"_blank\" href=\"https:\/\/www.hollywoodreporter.com\/business\/business-news\/ai-copyright-law-studios-tech-actors-writers-1235638242\/\" >because<\/a> \u201cit does not take into account the human creativity that goes into creating a work using AI as a tool.\u201d<\/p>\n<p><strong>The Legal Future of AI<\/strong><\/p>\n<p>In 2023, two bills were introduced in Congress to address scams that use AI\u2014the <a target=\"_blank\" href=\"https:\/\/www.congress.gov\/bill\/116th-congress\/house-bill\/3230\" >DEEPFAKES Accountability Act<\/a> in the House and the <a target=\"_blank\" href=\"https:\/\/www.coons.senate.gov\/imo\/media\/doc\/no_fakes_act_one_pager.pdf\" >NO FAKES Act<\/a> in the Senate. Both bills require guardrails such as content labels or permission to use someone\u2019s voice or image.<\/p>\n<p>Congress needs to do much more to update copyright protections related to AI. By mid-2024, Congress had yet to make significant progress in enacting legislation on this issue. According to the nonprofit <a target=\"_blank\" href=\"https:\/\/www.brennancenter.org\/our-work\/research-reports\/artificial-intelligence-legislation-tracker\" >Brennan Center for Justice<\/a>, several bills introduced in the 118th Congress (2023-2024) focused on high-risk AI, required purveyors of these systems to assess the technology, imposed transparency requirements, created a new regulatory authority to oversee AI or designated the role to an existing agency, and offered some protections to consumers through liability measures.
Despite sharp divisions between Democrats and Republicans, there is bipartisan agreement that regulation of AI is needed.<\/p>\n<p>On January 10, 2024, at a <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/congress-senate-tech-companies-pay-ai-training-data\/?bxid\" >Senate hearing on AI\u2019s impact on journalism<\/a>, Republican and Democratic lawmakers agreed that OpenAI and other AI companies should pay media organizations for using their content in AI projects. \u201cIt\u2019s not only morally right,\u201d said Richard Blumenthal, the Democrat who chairs the Judiciary Subcommittee on Privacy, Technology, and the Law. \u201cIt\u2019s legally required,\u201d he added, according to Wired.<\/p>\n<p>Josh Hawley, a Republican, agreed. \u201cIt shouldn\u2019t be that just because the biggest companies in the world want to gobble up your data, they should be able to do it,\u201d he <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/congress-senate-tech-companies-pay-ai-training-data\/?bxid=5be9f4663f92a404692f5fba&amp;cndid=26282935&amp;esrc=OIDC_SELECT_ACCOUNT_PAGE&amp;source=Email_0_EDT_WIR_NEWSLETTER_0_DAILY_ZZ&amp;utm_brand=wired&amp;utm_campaign=aud-dev&amp;utm_content=Wir_Daily_011024&amp;utm_mailing=Wir_Daily_011024&amp;utm_medium=email&amp;utm_source=nl&amp;utm_term=P6\" >said<\/a>.<\/p>\n<p>Media industry leaders have decried AI\u2019s uncompensated use of their content. Only one witness\u2014a journalism professor\u2014objected at the congressional hearing on the issue, insisting that data obtained without payment for training purposes was fair use.
\u201cI must say that I am offended to see publishers lobby for protectionist legislation, trading on the political capital earned through journalism,\u201d <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/congress-senate-tech-companies-pay-ai-training-data\/?bxid=5be9f4663f92a404692f5fba&amp;cndid=26282935&amp;esrc=OIDC_SELECT_ACCOUNT_PAGE&amp;source=Email_0_EDT_WIR_NEWSLETTER_0_DAILY_ZZ&amp;utm_brand=wired&amp;utm_campaign=aud-dev&amp;utm_content=Wir_Daily_011024&amp;utm_mailing=Wir_Daily_011024&amp;utm_medium=email&amp;utm_source=nl&amp;utm_term=P6\" >said<\/a> Jeff Jarvis, a professor at the Craig Newmark Graduate School of Journalism.<\/p>\n<p>Experts on AI who were not at the hearing, however, have yet to reach a consensus on compensation. \u201cWhat would that even look like?\u201d <a target=\"_blank\" href=\"https:\/\/news.cornell.edu\/stories\/2023\/05\/kreps-generative-ai-holds-promise-peril-democracies\" >asked<\/a> Sarah Kreps, who directs the Tech Policy Institute at Cornell University. \u201cRequiring licensing data will be impractical, favor the big firms like OpenAI and Microsoft that have the resources to pay for these licenses, and create enormous costs for startup AI firms that could diversify the marketplace and guard against hegemonic domination and potential antitrust behavior of the big firms.\u201d<\/p>\n<p>There\u2019s some disagreement, even among those favoring some form of licensing for AI training data.
Northwestern computational journalism professor Nick Diakopoulos <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/congress-senate-tech-companies-pay-ai-training-data\/#:~:text=%E2%80%9CAs%20a%20high%2Dquality%20and,journalism%20professor%20Nick%20Diakopoulos%20says.\" >underscored<\/a> the ambiguity: \u201cAs a high-quality and up-to-date source of information, news media is a valuable source of data for AI companies. My opinion is that they should pay to license it and that it is in their interest to do so. But I do not think a mandatory licensing regime is tenable.\u201d<\/p>\n<p>If <a target=\"_blank\" href=\"https:\/\/www.hollywoodreporter.com\/business\/business-news\/ai-copyright-law-studios-tech-actors-writers-1235638242\/\" >Congress doesn\u2019t intervene<\/a>, it will fall to the courts to determine the legality of using copyrighted works in training datasets for AI companies. Is it fair use if the content produced is considered \u201ctransformative\u201d because it differs significantly from the original books or images used to train the software system?<\/p>\n<p>The fact that AI companies are training their systems for profit may sway the courts in another direction. Do AI companies need to pay for the training data that powers their generative AI systems? Several <a target=\"_blank\" href=\"https:\/\/www.wired.com\/story\/matthew-butterick-ai-copyright-lawsuits-openai-meta\/\" >lawsuits<\/a> against Meta, Alphabet, and OpenAI may offer an answer about whether training on copyrighted material constitutes infringement.<\/p>\n<p>\u201cIt seems everybody thinks that AI needs to be regulated,\u201d <a target=\"_blank\" href=\"https:\/\/www.guggenheim.org\/articles\/checklist\/the-artist-preserving-histories-with-ai?utm_medium=Email&amp;utm_source=SFMC&amp;utm_campaign=PP_LateShiftLG_011223\" >said<\/a> artist Stephanie Dinkins, an AI practitioner, during an interview with Noam Segal, LG Electronics Associate Curator at the Guggenheim Museum.
\u201cI think we need to be thinking about the idea of context and knowing what we\u2019re looking at versus just seeing some materialization of something that nobody understands and thinks exists but maybe doesn\u2019t. I think that we\u2019re so far behind [in] thinking about this in a real way\u2026 It still feels like now there are meetings happening, but we\u2019re dragging our feet. And it feels as if, at a governmental level, we don\u2019t quite understand what we\u2019re dealing with yet.\u201d<\/p>\n<p>Echoing Dinkins\u2019 view, Kevin Roose, tech correspondent for the New York Times, <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2024\/01\/05\/podcasts\/nyt-lawsuit-openai-imessage-new-years-tech.html\" >said<\/a> in a Times podcast that new copyright laws for AI were unnecessary. \u201cBut\u2026 it feels bizarre that when we talk about these AI models, we\u2019re citing case law from 30, 40, 50 years ago. \u2026 [It] just feels a little bit like we don\u2019t quite have the legal and copyright frameworks that we would need because what\u2019s happening under the hood of these AI models is actually quite different from other kinds of technologies.\u201d<\/p>\n<p><strong>Impending Peril or Profound Revolution\u2014or Both?<\/strong><\/p>\n<p>Forget \u201c<a target=\"_blank\" href=\"https:\/\/www.merriam-webster.com\/dictionary\/doomscroll\" >doomscrolling<\/a>.\u201d It\u2019s not half as much fun as the dystopian revels. 
AI has inspired all sorts of catastrophic scenarios that, in the worst cases, may spell the end of civilization as we know it.<\/p>\n<p>By now, we all know the stories\u2014the deepfakes, including <a target=\"_blank\" href=\"https:\/\/www.theverge.com\/2024\/1\/25\/24050334\/x-twitter-taylor-swift-ai-fake-images-trending\" >pornographic images of Taylor Swift<\/a>, that were widely seen before being taken down, or more disturbingly, the <a target=\"_blank\" href=\"https:\/\/www.politico.com\/news\/2024\/05\/28\/ai-deepfake-nudes-schools-states-00160183\" >naked images of high school girls produced by AI<\/a>, or for that matter, the synthetic robocalls by AI mimicking the <a target=\"_blank\" href=\"https:\/\/www.cnn.com\/2024\/01\/22\/politics\/fake-joe-biden-robocall\/index.html?utm_term=1706787941577b9c89d70a6ef&amp;utm_source\" >voice of President Joe Biden<\/a> just before the 2024 New Hampshire primaries.<\/p>\n<p>And we\u2019re familiar enough with the hallucinations\u2014the seemingly authentic, even oracular, statements by AI that have no basis in fact.<\/p>\n<p>And there are all those jobs that may soon be redundant because of AI\u2014accountants, reporters, data programmers, retailers, paralegals.<\/p>\n<p>In the 2023 Hulu series, \u201c<a target=\"_blank\" href=\"https:\/\/www.imdb.com\/title\/tt15227418\/mediaviewer\/rm24335361\/?ref_=tt_ov_i\" >A Murder at the End of the World<\/a>,\u201d the villain (spoiler alert!)
turns out to be AI, echoing the plot of Robert Harris\u2019s 2011 novel, <a target=\"_blank\" href=\"https:\/\/www.kirkusreviews.com\/book-reviews\/robert-harris\/fear-index\/\" ><em>The Fear Index<\/em><\/a>, published long before the advent of generative AI, in which a sinister computer program manipulates the financial markets.<\/p>\n<p>But while the machinery operating the malicious software can be destroyed in the Hulu series, the malevolent force in Harris\u2019s novel can\u2019t be unplugged or blown up because it can always make endless copies of itself.<\/p>\n<p>People fear AI systems because they can\u2019t predict what the technology can do. While we can feed them images, music, and data galore, we\u2014users and programmers alike\u2014do not know what the result will be.<\/p>\n<p>AI may turn out to be as profound and revolutionary as the telephone, radio, television, desktop computers, and smartphones. But as with those inventions, which we tend to take for granted, AI may also become incorporated into the fabric of our lives to such a degree that its impact is blunted by its familiarity.<\/p>\n<p>Americans tend to fall in love with the \u201cnext big thing.\u201d Or, in the case of AI, the \u201ccurrent big thing.\u201d Yet another \u201cnext big thing\u201d will always emerge. Maybe it will be neural prosthetics\u2014implants inserted in the brain that will enhance our intelligence, ramp up our motor skills, improve memory, and allow us to read somebody else\u2019s thoughts.<\/p>\n<p>Such technological advances could give AI a whole new meaning. Then, as is the case now, alarmists will warn us of the looming perils and impending disasters of these new inventions. Congressional hearings are sure to follow.
Ideas for guardrails will be considered and dismissed\u2014or neglected even if they are adopted.<\/p>\n<p>Only time will tell whether AI will improve our quality of life or threaten our livelihood and <a target=\"_blank\" href=\"https:\/\/www.nytimes.com\/2023\/05\/30\/technology\/ai-threat-warning.html\" >existence<\/a>.<\/p>\n<p>_____________________________________________<\/p>\n<p style=\"padding-left: 40px;\"><em>Leslie Alan Horvitz is an author and journalist specializing in science and is a contributor to the <\/em><a target=\"_blank\" href=\"https:\/\/observatory.wiki\/Leslie_Alan_Horvitz\" >Observatory<\/a><em>. His nonfiction books include <\/em><a target=\"_blank\" href=\"https:\/\/www.google.com\/books\/edition\/Eureka\/9j0xJjHWqa8C\" >Eureka: Scientific Breakthroughs That Changed the World<\/a>, <a target=\"_blank\" href=\"https:\/\/www.google.com\/books\/edition\/Understanding_Depression\/jZAyQwKRvogC\" >Understanding Depression<\/a><em> with Dr. Raymond DePaulo of Johns Hopkins University, and <\/em><a target=\"_blank\" href=\"https:\/\/www.google.com\/books\/edition\/Essential_Book_of_Weather_Lore\/47K3GAAACAAJ?hl=en\" >The Essential Book of Weather Lore<\/a><em>. His articles have been published by <\/em>Travel and Leisure, Scholastic, Washington Times, <em>and<\/em> Insight on the News<em>, among others. Horvitz has served on the board of <\/em><a target=\"_blank\" href=\"https:\/\/artomi.org\/\" >Art Omi<\/a><em> and is a member of <\/em><a target=\"_blank\" href=\"https:\/\/pen.org\/\" >PEN America<\/a><em>. He is based in New York City.
Find him online at <a target=\"_blank\" href=\"https:\/\/tinyurl.com\/3v8fdh2k\" >lesliehorvitz.com<\/a><\/em><\/p>\n<p><em>This article was produced by <\/em><a target=\"_blank\" href=\"https:\/\/independentmediainstitute.org\/earth-food-life\/\" ><em>Earth | Food | Life<\/em><\/a><em>, a project of the <\/em>Independent Media Institute.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>20 Aug 2024 &#8211; If AI creates the content, who owns the work? Answering this complex question is crucial to understanding the legal and ethical implications of AI-generated content.<\/p>\n","protected":false},"author":4,"featured_media":244980,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3078],"tags":[1733,3359,1923,642],"class_list":["post-271943","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence-ai","tag-artificial-intelligence-ai","tag-copyright","tag-legality","tag-literature"],"_links":{"self":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/271943","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/comments?post=271943"}],"version-history":[{"count":1,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/271943\/revisions"}],"predecessor-version":[{"id":271945,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/posts\/271943\/revisions\/271945"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media\/244980"}],"wp:attachment":[{"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/media?paren
t=271943"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/categories?post=271943"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.transcend.org\/tms\/wp-json\/wp\/v2\/tags?post=271943"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}