Anthropic goes for a settlement. Wants to avoid copyright precedent


Anthropic, the creator of the popular AI model Claude, is close to reaching a settlement in a landmark copyright class action lawsuit. The move allows the company to avoid a potentially costly and, more importantly, precedent-setting court battle that could affect the entire artificial intelligence industry.

The case is a focal point in the growing conflict between AI developers and content creators.

The dispute centres on a fundamental question: is it legal to use copyrighted works to train commercial AI models without the authors' consent or compensation?

Technology companies often argue that such activity is necessary for innovation and falls within fair use. Creators, on the other hand, see this as a massive theft of intellectual property that feeds a multi-billion dollar market.

The lawsuit against Anthropic gained prominence after US District Judge William Alsup certified it as a class action, potentially involving up to 7 million authors.

A key allegation that threatened to weaken the company's litigation position is that at least some of the training data – including tens of thousands of books – came from pirated sources. This distinguishes the case from the hypothetical use of legally acquired copies and makes a fair use defence harder to mount.

Although the details of the agreement have not yet been made public, the decision to settle is seen as a strategic move.

Anthropic, like competitors such as OpenAI and Meta that face similar accusations, thus avoids the risk of a court ruling that would establish a binding legal doctrine for the entire industry.

A settlement, unlike a judgment, does not create binding law, but it sends a clear financial and market signal: litigation costs and the risk of losing are high enough that AI companies are beginning to weigh the economics of licensing training data instead.

The final terms of the agreement will be closely watched by the industry as a whole, potentially becoming a template for future settlements and shaping new rules for responsible data acquisition for AI training.

By Natalia Zębacka