AI art generator Stable Diffusion is facing another copyright lawsuit, this time from Getty Images

Stability AI, the company behind the art-generating tool Stable Diffusion, is facing its second copyright lawsuit this week over allegations that it scraped copyrighted material to train its systems. This time, stock image, video, and music supplier Getty Images has launched legal proceedings in the High Court of Justice in London against Stability AI.

Getty Images claims that Stability AI illegally copied and processed millions of copyrighted photos “without a license to promote Stability AI’s economic interests and to the detriment of the content producers.”

Getty Images CEO Craig Peters told The Verge that the company has notified Stability AI of the impending legal action in the United Kingdom. Getty has not said whether it will also file a lawsuit in the US.

“Because the company [Stability AI] has not contacted Getty Images to utilize our work or that of our contributors, we are taking actions to safeguard our intellectual property rights as well as those of our contributors,” Peters explained.


Stability AI’s attorneys appear to have a busy few months, or years, ahead of them. Yesterday, three artists filed a class action complaint against Stability AI, along with fellow AI art generator Midjourney and portfolio site DeviantArt, alleging copyright infringement. Creators are concerned about AI systems being trained on their work without consent, credit, or compensation, according to attorney Matthew Butterick, who filed the complaint alongside the Joseph Saveri Law Firm, which specializes in antitrust and class action cases.

Concerns about the material generative AI systems are trained on sit alongside fears that they could displace human jobs. The issue is proving to be a legal minefield, with most system developers arguing that such training is covered by the fair use doctrine in the United States or fair dealing in the United Kingdom. Unsurprisingly, Peters says Getty Images does not accept that argument.

An independent analysis of Stable Diffusion’s training data found that a considerable portion of it originated from Getty Images and other stock photo sites, which may bolster Getty Images’ claims. The AI also frequently recreates the Getty Images watermark in the images it generates.

Getty Images, according to Peters, is not interested in monetary damages or in slowing the development of these AI systems, but rather in establishing a framework that protects intellectual property. Stability AI says the next version of Stable Diffusion will allow artists to opt out of having their work included in training datasets, but that may not be enough to appease original creators and companies like Getty Images.

Adding to the controversy, a California-based AI artist recently discovered that private medical record photos taken by her doctor in 2013 were included in the LAION-5B image dataset. The dataset, compiled by a nonprofit research group in Germany, pairs roughly 5 billion images with descriptive captions and was used to train Stable Diffusion and other generative AIs. The website Have I Been Trained lets artists check whether their work appears in LAION-5B.
