Thousands of authors urge AI companies to stop using work without permission
Thousands of writers including Nora Roberts, Viet Thanh Nguyen, Michael Chabon and Margaret Atwood have signed a letter asking artificial intelligence companies like OpenAI and Meta to stop using their work without permission or compensation.
It’s the latest in a volley of counter-offensives the literary world has launched in recent weeks against AI. But protecting writers from the negative impacts of these technologies is not an easy proposition.
According to a forthcoming report from The Authors Guild, the median income for a full-time writer last year was $23,000. And writers’ incomes declined by 42% between 2009 and 2019.
The advent of text-based generative AI applications like GPT-4 and Bard, which scrape the web for authors’ content without permission or compensation and then use it to produce new content in response to users’ prompts, is giving writers across the country even more cause for worry.
“There’s no urgent need for AI to write a novel,” said Alexander Chee, the bestselling author of novels like Edinburgh and The Queen of the Night. “The only people who might need that are the people who object to paying writers what they’re worth.”
Chee is among the nearly 8,000 authors who just signed a letter addressed to the leaders of six AI companies including OpenAI, Alphabet and Meta.
“It says it’s not fair to use our stuff in your AI without permission or payment,” said Mary Rasenberger, CEO of the Authors Guild. The nonprofit writers’ advocacy organization created the letter and sent it to the AI companies on Monday. “So please start compensating us and talking to us.”
Rasenberger said the guild is trying to get these companies to settle without suing them.
“Lawsuits are a tremendous amount of money,” Rasenberger said. “They take a really long time.”
But some literary figures are willing to fight the tech companies in court.
Authors including Sarah Silverman, Paul Tremblay and Mona Awad recently signed on as plaintiffs in class action lawsuits alleging Meta and/or OpenAI trained their AI programs on pirated copies of their works. The plaintiffs’ lawyers, Joseph Saveri and Matthew Butterick, couldn’t be reached in time for NPR’s deadline, and the AI companies declined requests for comment.
Gina Maccoby is a literary agent in New York. She says the legal actions are a necessary step towards getting writers a fair shake.
“It has to happen,” Maccoby said. “That’s the only way these things are settled.”
Maccoby said agents, including herself, are starting to talk to publishers about including language in writers’ contracts that prohibits unauthorized uses of AI, as another way to protect their livelihoods and those of their clients. (According to a recent Authors Guild survey about AI, 90% of the writers who responded said that “they should be compensated for the use of their work in training AI,” while 67% said they “were not sure whether their publishing contracts or platform terms of service include permissions or grant of rights to use their work for any AI-related purposes.”)
“What I hear from colleagues is that most publishers are amenable to restricting certain kinds of AI use,” Maccoby said, adding that she has yet to add such clauses to her own writers’ contracts. The Authors Guild updated its model contract in March to include language addressing the use of AI.
The major publishers NPR contacted for this story declined to comment.
Maccoby said even if authors’ contracts explicitly forbid AI companies from scraping and profiting from literary works, the rules are hard to enforce.
“How does one even know if a book is in a data set that was ingested by an AI program?” Maccoby said.
In addition to letters, lawsuits and contractual language, the publishing sector is also looking to safeguard authors’ futures by advocating for legislation governing how generative AI can and cannot be used.
The Authors Guild’s Rasenberger said her organization is actively lobbying for such bills. Meanwhile, many hearings have been held recently at various levels of government on AI-related topics, such as last week’s Senate Judiciary subcommittee hearing on AI and copyright.
“Right now there’s a lot of talking about it,” said Rumman Chowdhury, a Responsible AI Fellow at Harvard University, who gave testimony at one such hearing in June. “But we’re not seeing yet any concrete legislation or regulation coming out.”
Chowdhury said the way forward is bound to be messy.
“Some of it will be litigated, some of it will be regulated, and some of it people will literally just have to shout until we’re heard,” she said. “So right now, the best we can do is ask the AI companies ‘pretty, pretty please,’ and hopefully somebody will respond.”
Audio and digital stories edited by Meghan Collins Sullivan.