Artists are suing Stability AI, Midjourney and DeviantArt


Summary

A class action lawsuit has been filed in the United States against Midjourney and Stability AI, as well as the art platform DeviantArt.

American artists Sarah Andersen, Kelly McKernan, and Karla Ortiz have filed a class action lawsuit in California against Stability AI (Stable Diffusion) and Midjourney. The artists seek damages and an injunction to prevent future harm.

The art platform DeviantArt is also accused of providing thousands or even millions of images to the LAION dataset used to train Stable Diffusion.

Instead of siding with the artists, DeviantArt launched DreamUp, an AI art app based on Stable Diffusion, according to the plaintiffs.


AI code lawyer now also sues over AI images

Behind the lawsuit is programmer and attorney Matthew Butterick. He is also pursuing a lawsuit against Microsoft, GitHub, and OpenAI, claiming that GitHub's AI coding assistant Copilot reproduces snippets of developer code without attribution, violating open-source license terms.

Butterick delivers a harsh verdict on Stable Diffusion and its kind: “It is a parasite which, if allowed to proliferate, will cause irreparable harm to artists, now and in the future.”

Datasets used without consent are the weak point of current image AI systems

In the complaint, Butterick points out the greatest weakness of image AI systems from a legal perspective: almost no artist has given explicit consent for their works to be used to train an AI system.

Even if the images generated by these systems pass as originals, the system generating them would still be built on unauthorized data.

As Butterick puts it, “because all visual information in the system is derived from the copyrighted training images, the images produced, regardless of their outward appearance, are necessarily derivative works of those training images.”


A recent study examined the uniqueness of images generated by an AI diffusion model and showed that near-exact copies of images from the training dataset appear regularly among the generated outputs – in at least two out of 100 cases.

Last November, Stability AI founder Emad Mostaque discussed the possibility that future Stable Diffusion models could be trained on fully licensed datasets. He also said artists should have opt-out mechanisms for their image data.
