Thousands of Authors Urge AI Companies to Stop Using Their Works without Permission

ARTIFICIAL INTELLIGENCE-AI, 24 Jul 2023

Chloe Veltman | NPR - TRANSCEND Media Service

A visitor walks past shelves of books at the Mohammed bin Rashid Library in Dubai in June 2022. The library incorporates technology and artificial intelligence, including robots to help visitors and an electronic book retrieval system. It’s just one example of the many ways AI has entered the world of books.
Giuseppe Cacace/AFP via Getty Images

17 Jul 2023 – Thousands of writers including Nora Roberts, Viet Thanh Nguyen, Michael Chabon and Margaret Atwood have signed a letter asking artificial intelligence companies like OpenAI and Meta to stop using their work without permission or compensation.

It’s the latest in a volley of counter-offensives the literary world has launched in recent weeks against AI. But protecting writers from the negative impacts of these technologies is not an easy proposition.

According to a forthcoming report from The Authors Guild, the median income for a full-time writer last year was $23,000. And writers’ incomes declined by 42% between 2009 and 2019.

The advent of text-based generative AI applications like GPT-4 and Bard, which scrape the web for authors’ content without permission or compensation and then use it to produce new content in response to users’ prompts, is giving writers across the country even more cause for worry.

“There’s no urgent need for AI to write a novel,” said Alexander Chee, the bestselling author of novels like Edinburgh and The Queen of the Night. “The only people who might need that are the people who object to paying writers what they’re worth.”

Chee is among the nearly 8,000 authors who just signed a letter addressed to the leaders of six AI companies including OpenAI, Alphabet and Meta.

“It says it’s not fair to use our stuff in your AI without permission or payment,” said Mary Rasenberger, CEO of the Authors Guild. The nonprofit writers’ advocacy organization created the letter and sent it out to the AI companies on Monday. “So please start compensating us and talking to us.”

Rasenberger said the guild is trying to get these companies to settle without suing them.

“Lawsuits are a tremendous amount of money,” Rasenberger said. “They take a really long time.”

But some literary figures are willing to fight the tech companies in court.

Authors including Sarah Silverman, Paul Tremblay and Mona Awad recently signed on as plaintiffs in class action lawsuits alleging Meta and/or OpenAI trained their AI programs on pirated copies of their works. The plaintiffs’ lawyers, Joseph Saveri and Matthew Butterick, couldn’t be reached in time for NPR’s deadline, and the AI companies turned down requests for comment.

Gina Maccoby is a literary agent in New York. She said the legal actions are a necessary step toward getting writers a fair shake.

“It has to happen,” Maccoby said. “That’s the only way these things are settled.”

Maccoby said agents, including herself, are starting to talk to publishers about including language in writers’ contracts that prohibits unauthorized uses of AI, as another way to protect their livelihoods and those of their clients. (According to a recent Authors Guild survey about AI, while 90% of the writers who responded said that “they should be compensated for the use of their work in training AI,” 67% said they “were not sure whether their publishing contracts or platform terms of service include permissions or grant of rights to use their work for any AI-related purposes.”)

“What I hear from colleagues is that most publishers are amenable to restricting certain kinds of AI use,” Maccoby said, adding that she has yet to add such clauses to her own writers’ contracts. The Authors Guild updated its model contract in March to include language addressing the use of AI.

The major publishers NPR contacted for this story declined to comment.

Maccoby said even if authors’ contracts explicitly forbid AI companies from scraping and profiting from literary works, the rules are hard to enforce.

Proceedings from the July 12 Senate Judiciary Subcommittee hearing on AI. There have been many such hearings in recent months tackling various aspects of the technology.

YouTube

“How does one even know if a book is in a data set that was ingested by an AI program?” Maccoby said.

In addition to letters, lawsuits and contractual language, the publishing sector is looking to safeguard authors’ futures by advocating for legislation governing how generative AI can and cannot be used.

The Authors Guild’s Rasenberger said her organization is actively lobbying for such bills. Meanwhile, many hearings have been held at various levels of government on AI-related topics lately, such as last week’s Senate Judiciary Subcommittee hearing on AI and copyright.

“Right now there’s a lot of talking about it,” said Rumman Chowdhury, a Responsible AI Fellow at Harvard University, who gave testimony at one such hearing in June. “But we’re not seeing yet any concrete legislation or regulation coming out.”

Chowdhury said the way forward is bound to be messy.

“Some of it will be litigated, some of it will be regulated, and some of it people will literally just have to shout until we’re heard,” she said. “So right now, the best we can do is ask the AI companies ‘pretty, pretty please,’ and hopefully somebody will respond.”

________________________________________________

Chloe Veltman is a correspondent on NPR’s Culture Desk. Before joining NPR in 2022, she was an arts and culture reporter and senior arts editor at KQED in San Francisco, and launched and led the arts and culture bureau at Colorado Public Radio in Denver.

 

Audio and digital stories edited by Meghan Collins Sullivan.

Go to Original – npr.org




