Anthropic Slammed With Lawsuit Over AI Writing

Anthropic is the company behind the large language model (LLM) and chatbot Claude. Recently, the company was hit with a lawsuit in federal court in San Francisco. In the suit, authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson allege that the company misused their books, along with hundreds of thousands of others, to train Claude. They further claim that pirated copies of their works were used to teach Claude to respond to human prompts.

Anthropic And AI Lawsuits

While it may be the most recent target, Anthropic is not the first company to be sued for using artists’ work to train large language models. OpenAI and Meta Platforms have both been sued for allegedly misusing authors’ works to train the language models underlying their AI systems. This recently filed case is the second brought against Anthropic and alleges that the company used pirated materials to improve its LLM.

It follows a suit brought against the company last year by music publishers over similar claims: alleged misuse of copyrighted song lyrics to teach Claude to respond to human prompts. The authors’ lawsuit seeks an unspecified amount of monetary damages and a court order permanently barring Anthropic from using their works to improve its models in the future.

Anthropic’s Background And The Lawsuit

A group of ex-OpenAI leaders founded Anthropic. The company has marketed itself as a more responsible and safety-focused LLM developer than its contemporaries (including OpenAI), asserting that its model interacts with people more naturally. In the lawsuit, the authors argue that the theft of their materials to train Claude made a mockery of this goal, as the company tapped into backlogs of pirated materials to build and enhance the model. As the lawsuit puts it, the company “seeks to profit from strip-mining the human expression and ingenuity behind each one of those works.”

Final Thoughts

The case against Anthropic marks yet another instance of an AI-focused company facing courtroom scrutiny over accusations that it used creative materials without the copyright holders’ consent to train its language models. These training practices raise many questions about the legality and morality of how large language models are built. Teaching these systems to respond to prompts in a way more akin to how a human would isn’t inherently negative, but the assumption by many of these companies that they can take whatever materials they find, without permission, is a problem that should be addressed.

