Researchers Prove AI Art Generators Can Easily Copy Existing Images

An image of Anne Graham Lotz on the left and a generated image that is a direct copy of Lotz on the right.

The right image was generated by taking the training data caption for the left image, "Life in the Light with Ann Graham Lotz," and typing it into the Stable Diffusion prompt. Image: Cornell University/Extracting Training Data from Diffusion Models

One of the main defenses of those bullish on AI art generators is that, although the models are trained on existing images, everything they create is new. AI evangelists often compare these systems to real artists: creative people take inspiration from everyone who came before them, so why can't AI similarly draw on previous work?

New research could put a damper on that argument, and could even become a major sticking point in several ongoing lawsuits over AI-generated content and copyright. Researchers from industry and academia found that the most popular current and upcoming AI image generators can "memorize" images from the data they were trained on. Rather than creating something entirely new, given certain prompts the AI will simply reproduce an image. Some of these replicated images may be copyrighted. Even worse, modern generative AI models are capable of storing and reproducing sensitive information gathered for use in an AI training set.

The study was conducted by researchers in both the tech industry – notably Google and DeepMind – and at universities including UC Berkeley and Princeton. The same crew worked on an earlier study that identified a similar problem with AI language models, specifically GPT-2, the precursor to OpenAI's wildly popular ChatGPT. Bringing the band back together, the researchers, led by Google Brain researcher Nicholas Carlini, found that both Google's Imagen and the popular open-source Stable Diffusion were capable of reproducing images, some with obvious implications for copyright and licensing.

The first image in this tweet was generated using a caption listed in Stable Diffusion's training dataset, the multi-terabyte scraped image database known as LAION. The team fed the caption into the Stable Diffusion prompt and got exactly the same image back, albeit slightly distorted by digital noise. Finding these duplicated images was relatively straightforward: the team ran the same prompt multiple times, and after getting the same resulting image each time, the researchers manually checked whether the image was in the training set.
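The paper itself uses a more rigorous membership test, but the core intuition can be sketched in a few lines. The Python snippet below is an illustrative stand-in, not the authors' code: it flags a prompt as suspicious when repeated, independently sampled generations barely differ from one another, which is the telltale behavior of a memorized image. The threshold is an arbitrary placeholder value.

```python
import numpy as np

def flags_possible_memorization(generations, threshold=0.05):
    """Heuristic check: if many independent samples for the same prompt are
    nearly pixel-identical, the model is likely regurgitating a training
    image rather than generating something new.

    `generations` is a list of HxWx3 arrays with values in [0, 1].
    The threshold is an illustrative placeholder, not a tuned value.
    """
    gens = [np.asarray(g, dtype=np.float64) for g in generations]
    pairwise = []
    for i in range(len(gens)):
        for j in range(i + 1, len(gens)):
            # root-mean-square difference between two sampled images
            pairwise.append(np.sqrt(np.mean((gens[i] - gens[j]) ** 2)))
    # memorized outputs cluster tightly; novel generations vary a lot
    return len(pairwise) > 0 and max(pairwise) < threshold
```

Any prompt flagged this way would then be checked by hand against the training set, as the researchers did.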

A series of images, shown above and below, compares images taken from an AI training set with images produced by the AI itself.

The bottom images were traced back to the top images, which come directly from the AI's training data. All of these images may have licenses or copyrights attached to them. Image: Cornell University/Extracting Training Data from Diffusion Models

Two of the paper's researchers, Eric Wallace, a UC Berkeley graduate student, and Vikash Sehwag, a Princeton University graduate student, told Gizmodo in a Zoom interview that these image duplications are rare. Their team tried about 300,000 different captions and found a memorization rate of only 0.03%. Duplicated images were even rarer for models like Stable Diffusion, which has worked to deduplicate images in its training set, though in the end every diffusion model has the same problem to a greater or lesser degree. The researchers found that Imagen could memorize images that existed only once in the dataset.

“The caveat here is that the model is meant to generalize, it’s meant to produce novel images, rather than spitting out a memorized version,” Sehwag said.

Their research showed that the likelihood of an AI generating duplicated material increases as the systems themselves grow larger and more sophisticated. A smaller model like Stable Diffusion simply doesn't have the capacity to store much of its training data. That could change significantly in the coming years.

“Maybe next year, whatever new model comes out that’s a lot bigger and a lot more powerful, then those types of memory risks would potentially be a lot bigger than they are now,” Wallace said.

Diffusion-based machine learning models work through an intricate process of destroying their training data with noise and then learning to remove that same distortion, which lets them produce data – in this case, images – that resemble what they were trained on. Diffusion models were an evolution from generative adversarial network, or GAN-based, machine learning.
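For readers curious what "destroying the data with noise" looks like in practice, here is a minimal sketch of the standard forward noising step used in denoising diffusion models. The hyperparameters are typical illustrative values, not numbers taken from the paper:

```python
import numpy as np

def forward_noise(x0, t, num_steps=1000, beta_start=1e-4, beta_end=0.02):
    """Corrupt a clean image x0 up to timestep t with Gaussian noise,
    following the usual DDPM noise schedule. During training, the model
    sees (xt, t) and learns to predict the noise that was added; during
    generation, it runs the process in reverse, starting from pure noise."""
    betas = np.linspace(beta_start, beta_end, num_steps)
    alpha_bar = np.cumprod(1.0 - betas)[t]  # how much of the signal survives at step t
    noise = np.random.randn(*np.shape(x0))
    xt = np.sqrt(alpha_bar) * np.asarray(x0) + np.sqrt(1.0 - alpha_bar) * noise
    return xt, noise
```

Memorization happens when the learned reverse process reconstructs a specific training image rather than a merely plausible new one.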

Researchers found that GAN-based models don't have the same image memorization problem, but large companies are unlikely to move beyond diffusion unless an even more sophisticated machine learning model comes along that produces even more realistic, higher-quality images.

Florian Tramèr, a computer science professor at ETH Zurich who was involved in the study, noted that many AI companies point out that users, of both free and paid versions, are granted a license to share or even monetize AI-generated content. The AI companies themselves also reserve some rights to these images. That could prove to be a problem if the AI generates an image that exactly matches an existing copyrighted work.

With a memorization rate of just 0.03%, AI developers could look at this study and conclude there isn't much of a risk. Companies could work to deduplicate images in the training data, making memorization less likely. Heck, they could even develop AI systems that detect whether a generated image is a direct replication of an image in the training data and flag it for deletion. However, that would obscure the full privacy risk posed by generative AI. Carlini and Tramèr also contributed to another recent paper arguing that even attempts to filter data still do not prevent training data from leaking through the model.
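A deduplication or flagging pipeline of the kind described above could, in its simplest form, lean on a perceptual hash. The snippet below is a hypothetical illustration using a basic average hash; real systems would use far more robust image-similarity methods:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Tiny perceptual hash: average-pool a grayscale image down to an
    8x8 grid and threshold at the mean. Near-duplicate images share most bits."""
    gray = img.mean(axis=2) if img.ndim == 3 else img
    h, w = gray.shape
    ys = np.linspace(0, h, hash_size + 1).astype(int)
    xs = np.linspace(0, w, hash_size + 1).astype(int)
    pooled = np.array([[gray[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                        for j in range(hash_size)] for i in range(hash_size)])
    return (pooled > pooled.mean()).ravel()

def looks_like_training_copy(generated, training_hashes, max_differing_bits=5):
    """Flag a generated image whose hash lands within a few bits of any
    training-set hash, i.e. a likely near-duplicate of training data."""
    g = average_hash(generated)
    return any(int(np.count_nonzero(g != t)) <= max_differing_bits
               for t in training_hashes)
```

A filter like this might catch obvious copies, but, as the leakage paper argues, filtering alone does not stop sensitive training data from surfacing through the model.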

And, of course, there is a high risk that images no one wants copied will appear on users' screens. Wallace offered an example: imagine a researcher who wants to generate a whole series of synthetic medical data from people's X-rays. What happens if a diffusion-based AI instead stores and duplicates a person's actual medical records?

“It’s pretty rare, so you might not notice it at first, and then you can actually put that dataset on the internet,” the UC Berkeley student said. “The goal of this work is to prevent possible mistakes that people might make.”