University Of Chicago’s Glaze And Nightshade Offer Artists A Defense Against AI

Image by Glashier, from Freepik

Reading time: 3 min

In a Rush? Here are the Quick Facts!

  • Glaze and Nightshade protect artists’ work from unauthorized AI training use.
  • Glaze masks images to prevent AI from replicating an artist’s style.
  • Nightshade disrupts AI by adding “poisoned” pixels that corrupt training data.

Artists are fighting back against exploitative AI models with Glaze and Nightshade, two tools developed by Ben Zhao and his team at the University of Chicago’s SAND Lab, as reported today by MIT Technology Review.

These tools aim to protect artists’ work from being used without consent to train AI models, a practice many creators see as theft. Glaze, downloaded over 4 million times since its release in March 2023, masks images by adding subtle changes that prevent AI from learning an artist’s style, says MIT.

Nightshade, an “offensive” counterpart, further disrupts AI models by introducing invisible alterations that can corrupt AI learning if used in training, as noted by MIT.

The tools were inspired by artists’ concerns about the rapid rise of generative AI, which often relies on online images to create new works. MIT reports that fantasy illustrator Karla Ortiz and other creators have voiced fears about losing their livelihoods as AI models replicate their distinct styles without permission or payment.

For artists, posting online is essential for visibility and income, yet many have considered removing their work to avoid being scraped for AI training, a step that would hinder their careers, as noted by MIT.

Nightshade, launched a year after Glaze, delivers a more aggressive defense, reports MIT. By adding “poisoned” pixels to images, it disrupts AI training, causing the models to produce distorted results if these images are scraped.

Nightshade’s symbolic effect has resonated with artists, who see it as poetic justice: if their work is stolen for AI training, it can damage the very systems exploiting it.

MIT notes that the tools initially faced some skepticism, as artists worried about data privacy. To address this, SAND Lab released a version of Glaze that operates offline, ensuring no data transfer and building trust with artists wary of exploitation.

The lab has also recently expanded access by partnering with Cara, a new social platform that prohibits AI-generated content, as noted by MIT.

Zhao and his team aim to shift the power dynamic between individual creators and AI companies.

By offering tools that protect creativity from large corporations, Zhao hopes to empower artists to maintain control over their work and redefine ethical standards around AI and intellectual property, says MIT.

The effort is gaining momentum, but some experts caution that the tools may not offer foolproof protection, MIT notes, as hackers and AI developers explore ways to bypass the safeguards.

With Glaze and Nightshade now accessible for free, Zhao’s SAND Lab continues to lead the charge in defending artistic integrity against the expanding influence of AI-driven content creation.
