
Glaze and Nightshade: Warriors against AI Art Theft

AI art theft is an evolving issue, and artists are turning to Nightshade and Glaze to protect their work by exploiting fundamental vulnerabilities in AI algorithms and models.

Introduction

Six-fingered hands, disappearing limbs, and cartoonishly disproportionate figures. The initial stages of AI art were rough, to say the least.

AI-generated imagery has become increasingly widespread, and companies like OpenAI, Midjourney, and DeviantArt are making constant improvements. However, they now face lawsuits from artists claiming that their art was used to train AI models without their consent. As these objections gained traction, a team at the University of Chicago started what would become known as The Glaze Project.

By collaborating with artists across the country, they developed tools for creators to fight back against AI art theft. Project leads Shawn Shan and Professor Ben Zhao launched two groundbreaking tools: Glaze, a defensive tool that prevents artists’ styles from being mimicked, and Nightshade, an offensive tool that corrupts AI models.

A generative AI takes in many images, each with a short caption, and uses this data to associate images with phrases. For example, if shown enough images of whales, all with captions containing the word ‘whale,’ it learns to associate the visual similarities across those images with ‘whale.’ To corrupt this data, images of whales could be fed to the AI under the faulty caption ‘shark’; with enough such images, ‘sharks’ will eventually look like whales. However, this takes a large number of images, and some of them may be filtered out for being too far off the mark.
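As a concrete, purely illustrative sketch of what such mislabeled training pairs might look like, assuming a toy dataset of image paths and captions:

```python
# Toy illustration of caption poisoning; file names and captions are made up.
clean_dataset = [
    {"image": "whale_001.jpg", "caption": "a whale breaching the ocean surface"},
    {"image": "whale_002.jpg", "caption": "a humpback whale swimming underwater"},
]

def poison_captions(dataset, target_word, replacement_word):
    """Relabel images of `target_word` as `replacement_word`, so a model trained
    on the result starts tying the wrong word to these visual features."""
    return [
        {"image": s["image"], "caption": s["caption"].replace(target_word, replacement_word)}
        for s in dataset
    ]

# With enough pairs like these in a training set, 'shark' drifts toward whale imagery.
poisoned_dataset = poison_captions(clean_dataset, "whale", "shark")
```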


Nightshade

While Nightshade may sound destructive or nefarious, it actually fixes these problems and protects artists’ intellectual property. Nightshade maximizes the effect corrupted images have on AI models while minimizing the number of images needed to corrupt AI data, ensuring corrupted images will not be filtered out. 

Researchers found that images generated by the model itself had the greatest effect, as they represent the epitome of the AI’s knowledge of a given subject. Using this insight, the developers needed to figure out how to corrupt these images so that they would still look unaltered to an ordinary human viewer.

Their answer lay in the use of perturbations: small changes to individual pixel values. They developed the following formula to find the optimal perturbation:
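Roughly, and using the notation explained just below, the optimization can be written as follows (the distance measure and the perceptual budget p are assumptions about the exact form):

$$\min_{\delta}\ \mathrm{Dist}\big(F(x_t + \delta),\ F(x^a)\big) \quad \text{subject to} \quad \lVert \delta \rVert < p$$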

The function F() extracts features from the images it is fed, with x_t being the image that will be ‘poisoned’ and x^a being the image generated by the AI model itself. δ is the perturbation added to x_t. The resulting change is indiscernible to viewers but has drastic effects when fed into an AI model, and the more Nightshade-poisoned images end up in a model’s training data, the further the targeted concept degrades. The following diagram shows how results vary from inputting 50 to 300 poisoned images:

Dogs become cats, cars become cows, and cubism becomes anime.
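As a rough illustration, the optimization described above could be sketched in PyTorch along the following lines; the feature extractor F, the optimizer, the step count, and the clamping budget are all assumptions made for illustration, not Nightshade’s actual implementation.

```python
import torch

def poison_image(x_t, x_a, F, budget=0.05, steps=200, lr=0.01):
    """Find a small perturbation delta so that x_t + delta still looks like x_t
    to a human, but like the model-generated image x_a to the feature extractor F."""
    delta = torch.zeros_like(x_t, requires_grad=True)
    target_features = F(x_a).detach()          # features the poison should imitate
    optimizer = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        optimizer.zero_grad()
        # Pull the poisoned image's features toward the model-generated image's features.
        loss = torch.nn.functional.mse_loss(F(x_t + delta), target_features)
        loss.backward()
        optimizer.step()
        # Keep each pixel change tiny so the edit stays invisible to viewers.
        with torch.no_grad():
            delta.clamp_(-budget, budget)

    return (x_t + delta).detach()
```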

Glaze

The second tool is Glaze, which makes minimal changes to artworks specifically to combat style mimicry. When someone tries to mimic the style of an artist who uses Glaze (provided that style is not one already commonly represented in training data, like Pablo Picasso’s or Van Gogh’s), the output will be wildly different. Like Nightshade, Glaze creates a disconnect between what human eyes and AI algorithms perceive, discouraging targeted mimicry of artists while preserving the majority of an image’s appearance.

This is achieved through adversarial examples: small changes that cause AI models to misclassify inputs. This class of vulnerability has been known for roughly a decade, and Glaze takes advantage of it by adding small, carefully targeted perturbations to an image’s pixel data.
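For a sense of how little it takes to fool a model, here is a minimal sketch of a classic adversarial example (the fast gradient sign method) in PyTorch; Glaze’s style cloak is far more sophisticated, and the model, label, and epsilon here are illustrative assumptions.

```python
import torch

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Shift every pixel by at most epsilon in the direction that most
    increases the classifier's loss, often flipping its prediction."""
    image = image.clone().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    # The sign of the gradient says which way to nudge each pixel.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```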


Limitations

Nonetheless, AI is constantly evolving; Glaze and Nightshade are not “future-proof” and must be continuously updated to counter new AI algorithms. As a result, these tools are best viewed as a temporary safeguard rather than a permanent solution to art theft.

Several programs and papers have already tried to undo Glaze’s effects. One of note is IMPRESS, which evaluated the effectiveness of adversarial examples and suggested that large inconsistencies between original and “Glazed” artworks could allow for the purification of an image into its original, un-Glazed state.

These programs use a technique called “noisy upscaling,” in which an image undergoes significant blurring or pixelation and is then upscaled. This strips out the added data that protects the artwork from AI scrapers, but the resulting image quality deteriorates significantly. In response, the Glaze team released a statement promising to use this attack to improve their algorithm and to continue releasing protective software.
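As a rough sketch of the degrade-then-upscale idea described above, using Pillow (the blur radius and scale factor are arbitrary choices, and real purification attacks like IMPRESS use more elaborate pipelines):

```python
from PIL import Image, ImageFilter

def noisy_upscale(path, factor=4, blur_radius=3):
    """Blur and shrink an image to wash out fine protective perturbations,
    then upscale it back to its original size (losing detail in the process)."""
    img = Image.open(path)
    degraded = img.filter(ImageFilter.GaussianBlur(radius=blur_radius)).resize(
        (max(1, img.width // factor), max(1, img.height // factor))
    )
    # Upscaling cannot restore the detail removed above, so quality suffers.
    return degraded.resize(img.size, Image.LANCZOS)
```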


Looking Forward

Artists like Autumn Beverly hope that tools like those from The Glaze Project will make companies think twice before taking their art without consent. Instead of having the option to exploit artists’ hard work, AI companies will, ideally, fairly compensate creators through royalties. And at least for now, Glaze and Nightshade do not seem likely to disappoint.