The Great AI Art Heist

A lab at the University of Chicago is protecting artists from theft by a new adversary: the machines.

March 4, 2025, 6:00 am

It began with a proverbial pebble at the peak of a proverbial mountain. And it’s why, in the darkness of a Chicago winter’s night, I am transfixed by a virtual menagerie of fantastical creatures.

A winged, three-headed beast breathes blue fire across an arctic tundra. A man-spider, with a bulbous red body, eight hairy legs, and an armor-clad human torso, hovers above an altar stacked high with human skulls and burning candles. A headless mer-creature is swallowed up by a wall, its hands clawing at the wallpaper while a silvery finned tail glistens under a pendant lamp. A landscape blooms across the page in blues and purples, castles overturned and covered by the mist of magic.

I am firmly in the world of Kim Van Deun, a Belgian artist who, after getting her master’s in biology and PhD in veterinary sciences, left academia to dedicate herself to illustration. It’s easy to surmise that her specialty is fantasy, something she confirms once we begin to email back and forth. When she was a kid, she says, her brother came home with The Dark Eye, a role-playing video game similar to Dungeons & Dragons. She found the illustrations of goblins and kobolds intoxicating: “I only wanted to know how you could draw things like that and spend your life dreaming up such monsters.”

Van Deun was able to push that childhood idea aside for years, she says, until it was “beginning to yell and throwing a ruckus in my head.” So she walked away from the promise of a steady paycheck and a pension to pursue the thing that she felt she was meant to do.

It didn’t take long for her to realize that generative artificial intelligence was going to be trouble — or that it would, at the very least, change everything — for independent artists. It was 2022, and generative AI models like ChatGPT were beginning to pique mainstream interest. The app Lensa AI had dazzled social media with the allure of summoning up instantaneous portraits of whomever you wanted, in any style you wanted. No one seemed to be thinking about how it manufactured such creations.

A couple of years earlier, Van Deun had found Fawkes, an image-cloaking software that protects personal privacy against unregulated facial recognition technology. Van Deun, who says she is “very shy,” was looking for ways to shelter the photos of herself that she uploaded to her website. It got her wondering if Fawkes might be able to do for her art what it did for a photograph of her face: give it some level of protection. She decided to cold-email its creators at the University of Chicago’s Security, Algorithms, Networking and Data Lab and ask.

And with that, the proverbial pebble — tottering and tenacious — tipped over the mountaintop and started to pick up speed.

U. of C. Professor Ben Zhao has led SAND Lab’s efforts to thwart AI that exploits artists.

It’s 9:30 a.m. on a Monday in October, and with the exception of Professor Ben Zhao, SAND Lab is empty. Housed in the John Crerar Library, a decidedly modern building among the neo-Gothic ones typical on the U. of C. campus, the lab is austere, almost monastic, with rows of white desks that will soon be occupied by students — no more than a dozen PhDs, and perhaps a handful of undergraduates and high school researchers — working on laptops. Large windows fill the room with sunlight. It’s what you’d expect from any academic space, save for the plentiful art on the walls.

On one of the desks sits a small but hefty trophy. “You’re the first to see it,” Zhao, 49, tells me with a smile. He’s just come back from Pasadena, California, where he accepted the Concept Art Association’s Community Impact Award on behalf of the lab for its work in developing two tools that have made it famous among artists and their advocates: Glaze and Nightshade, software programs that give artists a fighting chance against a growing adversary.

I can tell Zhao is proud. It’s been two years since the CAA hosted a town hall in response to its members’ growing concerns over the impact of generative AI — which means it’s been two years since Zhao raised his hand and offered up the lab’s expertise.

Back in 2022, when Van Deun emailed the lab and found out that Fawkes, in fact, could not work for her art the way she’d hoped, she told the team there about the upcoming meeting. Maybe they could join? As a former academic herself, Van Deun was working off a hunch that if she dangled a problem in front of a group of researchers, they’d be curious enough to stick around and learn more.

Zhao happened to be free that day, so he joined the virtual town hall, hosted by artist Karla Ortiz. A recording of it is still on YouTube and serves as a reminder of both the nascent state of generative AI, already slippery and rapidly evolving, and a crucial moment of reckoning. Watching the video, I get the sense that Ortiz and the other artists were simply trying to disseminate as much information as possible to their peers, in hope of determining some kind of path forward.

At one point, Ortiz showed a chart of the main players in generative AI, all of which use large data sets to train their models. Stability AI, the chart noted, relied on an open-source set named LAION-5B. At the time, LAION-5B had already scraped over 5.8 billion images from the internet. (LAION has said it is simply indexing links, not storing the images.) And because Stability AI had used the images for commercial purposes, it was participating in what Ortiz called data laundering: employing copyrighted data and private artworks to feed the machine in an ostensibly lawful way, leaving the original owners little legal recourse.

Halfway through the meeting, Ortiz introduced Greg Rutkowski, an artist whose work has been commissioned for iconic games like Magic: The Gathering and Dungeons & Dragons. About three months before, his fans had started reaching out to him. Did he know that his name was being used as a generative AI prompt? He didn’t. But when he Googled his name, the images populating the page weren’t ones he had created, but rather copycats generated by AI. “Fake works, signed by my name,” he told those gathered.


The experiences other artists at the town hall shared varied in scope, but the stark reality was the same: AI was stealing their work. It was taking away jobs. It was eroding their livelihoods.

By the time the discussion was opened for questions, Zhao was first in line. He wanted to help, he said. He brought up the lab, the work it had done with Fawkes, and suggested a few ideas, including a tool that would turn art into digital junk when it was scraped into a data set.

“Perhaps you guys would like a more cooperative role with the model — for example, you could track how much of your art was trained into the model and therefore you know how much profit could be headed back to you,” he offered. Ortiz smiled. A few months later, in January 2023, she would file a class-action lawsuit (still pending) with two other artists against Midjourney, Stability AI, DeviantArt, and Runway AI.

But first, Ortiz suggested another idea. One that was a little more proactive. Fun, even. “I would love a tool that if someone wrote my name and made it into a prompt, bananas would come out.”

Shawn Shan, a PhD student, coded the algorithm behind SAND Lab’s Glaze. He says he isn’t worried about blowback from Big Tech companies: “There’s so much at stake, my personal risk is less important.”

It wasn’t all bad. At least not at first. Before 2022, in Zhao’s view, AI was mostly a good thing. The professor seems like the kind of person who believes in good things. But when the good things sour, he’s not the kind of person who sits idly by. His X feed is filled with posts and reposts about generative AI that attempt to expose and debunk the mainstream hype surrounding the technology. Or at least it was. In November, Zhao departed the platform formerly known as Twitter (“Left this dumpster fire,” his inactive profile reads). This is also to say that Zhao doesn’t spend time in the murky middle. He picks sides.

Zhao is a professor of computer science at the U. of C., a post he’s held since 2017. He came to Chicago by way of the University of California, Santa Barbara, along with his wife, Professor Heather Zheng, with whom he leads SAND Lab.

But before all that, Zhao was an undergraduate student at Yale who liked to wake up early and walk across campus in the quiet morning hours to the university’s art museum. Usually it would be only him and the security guards, and he’d soak in the majesty of the masterpieces. Years later, in 2017, when he announced that he and Zheng were relocating from Santa Barbara to Chicago, he wrote in a post on the knowledge-sharing platform Quora that among the reasons they were making the move was the access to art and culture here: “For me personally, I can practically LIVE in the Art Institute with [our two daughters]. Heather could bring us food to keep us alive.” A computer scientist first and foremost, but a computer scientist who genuinely loves art.

This was back when one could safely assume that art was being made by humans, and when AI was not just mostly good but ethical, even. When AI largely meant machine learning and deep neural network models (the kind of technology that trained self-driving cars).

But once generative AI went mainstream in 2022, the balance shifted. Suddenly it was possible, with a few keyboard strokes, to create pictures of anything, in any style, including crisp, detailed, photo-like images. That the technology seemed to be getting smarter by the minute only added to the hype — and the money followed. Billions of dollars have been poured into technology that’s steamrolling independent visual artists, voice actors, photographers, writers, and others.

All this masks a simple truth: Generative AI isn’t actually all that smart. “It’s truly the most useless of AI things,” Zhao tells me, leaning back in his chair. “It just really is.” There’s a frustrated resignation in his voice. The kind of resignation that comes when you’ve screamed yourself hoarse and no one’s listening.

It’s important, then, to understand how we got here. Generative AI first took off with the advent of large language models, or LLMs, which allow a user to type a prompt or question into a chatbot and receive a written response. Ask ChatGPT for an itinerary for your trip to Italy, and it will generate a schedule based exclusively on probability. Zhao is quick to remind me — in a way that makes me think he’s had to remind a lot of people — that apps like ChatGPT are far from sentient. When you feed one all the answers to all the questions that exist on the internet, it’s easy to get compelling, if not particularly groundbreaking, answers. Even where to get the perfect cappuccino in Rome. But, Zhao points out, “it literally does not know anything.”
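Zhao's point, that a chatbot "literally does not know anything," can be made concrete with a toy sketch. This is not how any real LLM is built (they use neural networks over vast corpora, not word counts), but it shows the underlying principle he describes: the system only picks the statistically likeliest next word from patterns in its training text. The tiny "corpus" below is invented for illustration.

```python
# Toy illustration, not a real language model: a text generator that "knows"
# nothing -- it only picks the statistically likeliest next word.
from collections import Counter, defaultdict

corpus = ("the perfect cappuccino in rome is near the pantheon . "
          "the perfect cappuccino in rome is worth the walk .").split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    """Return the highest-probability next word -- pure pattern matching."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("perfect"))  # "cappuccino"
print(most_likely_next("in"))       # "rome"
```

The generator produces plausible continuations not because it understands coffee or Rome, but because those words co-occur in its training data, which is Zhao's argument in miniature.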

To understand how generative AI evolved to produce images from text prompts, you have to go back to 2017, when machine learning dominated AI research, largely in the form of image classification: This is a statue. This is a cat. That kind of thing. Then, after LLMs were popularized, something called text-to-image diffusion models entered the mainstream.

Zhao makes his hands into fists and holds them in front of him. “Imagine two balls connected by a skinny connector,” he begins. “One ball represents everything we understand about how words relate to each other. The other ball represents visual features like color, shape, and texture.”

These diffusion models learn how to put the pieces together, using large data sets filled with images scraped from the internet. So when you ask a program like Midjourney to give you a picture of a cat, it has been trained to make sense of an anticipated pattern: whiskers, ears, fur. Maybe even a collar with a bell. Like LLMs, these diffusion models aren’t particularly smart — they’re just really good at understanding how data is arranged. And that’s because they’ve been trained on billions of images from all corners of the web.

This is what generative AI is: an increasingly sophisticated understanding of the rules that inform data and patterns. It’s also the aggregate of decades of research, which means it’s not just one thing, but a series of things that, combined, create a technology that’s perceived as magic. That there are so many components is also what makes it easy to mess with.

Take the self-driving car, for example. A lot goes into making sure the car drives accurately, that it stays on the road and doesn’t hit people. There’s complex technology (like deep neural networks), but there’s also the more straightforward tech, such as the camera that relays critical information to the car’s computer. If you mess with the camera by, say, covering up a stop sign at an intersection, it doesn’t matter how sophisticated the computer is. If it’s getting bad information, things will go wrong.

Glaze is one of two programs SAND Lab created to hinder AI models from using artists’ work without permission or compensation. The Glaze software prevents style mimicry. Take the example below, in which Glaze is applied to Claude Monet’s Stormy Sea at Étretat before it’s uploaded to the internet. To the human eye, the painting still looks like the original (left), in the French artist’s impressionist style, but AI models perceive an entirely different style — cubism (right).

After a string of unanswered emails, I’m finally on the phone with Shawn Shan, a PhD student at the U. of C. I’ve called him to talk about the work he’s doing on generative AI (or “exploitive generative AI,” as he gently corrects me) at SAND Lab, and I quickly realize that Shan wasn’t ignoring my interview requests. He just gets a lot of them. Both Shan and Zhao have been exceptionally busy this past year, presenting at conferences and giving talks. Zhao recorded a TED Talk in San Francisco in October. Around the same time, Shan was in Germany, delivering a keynote address about their work at the media conference Medientage München. Shortly before that, he was named MIT Technology Review’s Innovator of the Year for his work on Glaze and Nightshade.

Shan, 27, has been with SAND Lab since he was an undergraduate. In 2017, after hearing Zhao give a lecture about drones, 3D, and AI, Shan sent him an email asking if they could work together. Usually it’s difficult for an undergraduate to get a research position in a lab like Zhao’s, but Zhao was new to the university and hadn’t built up his bench yet. Shan was in the right place at the right time, and he was immediately hooked: “I all but stopped going to classes and just stayed in the lab doing research.”

At the time, Zhao and Shan’s research was focused on security and AI. Soon enough, they were developing the image-cloaking technology that would eventually become Fawkes. In June 2022, after Van Deun emailed the lab, it was Shan who put the CAA town hall on Zhao’s calendar. And after the two decided to jump into the fight against generative AI, it was Shan who coded the algorithm behind their first software program meant to fool it.

Zhao and Shan had studied the technology that went into generative AI images and realized that if they could mess with the model training process, they could break the whole system. Put another way: AI machines and the human eye see things differently, so pixel-level changes to images, while largely imperceptible to humans, could drastically alter what the generative AI models interpret and classify. That became the basis for Glaze, the first such tool that SAND Lab released (it can be downloaded free from the lab’s website). This was in March 2023, just four months after the town hall.
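The gap between human and machine perception that Glaze exploits can be sketched in a few lines. This is emphatically not Glaze's actual algorithm (which perturbs images against learned deep-network features); it is a hand-built toy, with a made-up linear "style detector," showing only the core idea: a per-pixel change far too small for the eye to notice can still push an image across a machine's decision boundary.

```python
import numpy as np

# A 2x2 "artwork" and a toy linear "style detector": positive score reads as
# one style, negative as another. Both are invented stand-ins -- real models
# learn their features; nothing here is hand-picked by Glaze.
image = np.array([[0.50, 0.40],
                  [0.60, 0.55]])
w = np.array([[ 1.0, -1.0],
              [ 1.0, -1.0]])

def style_score(img):
    return float((img * w).sum())

print(style_score(image))  # 0.15 -> positive: style A

# "Cloak" the image: nudge every pixel by at most 0.05, in the direction
# that lowers the detector's score.
eps = 0.05
cloaked = image - eps * np.sign(w)

print(style_score(cloaked))            # -0.05 -> negative: style B
print(np.abs(cloaked - image).max())   # 0.05: visually the same picture
```

To a person, the cloaked image is indistinguishable from the original; to the toy detector, it belongs to a different style entirely, which is the asymmetry Glaze scales up against real models.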

Artists can run Glaze on their images before uploading them to the internet. The software makes changes to each pixel, shifting what the generative AI models perceive. For example, an image that has been Glazed could be seen by human eyes as impressionism but as cubism by the generative AI models. If someone then attempts to use an artist’s name as a prompt, Glaze ensures the output looks completely different from the artist’s style.


When Zhao announced Glaze’s arrival, the response from artists was immediate and enthusiastic. Ortiz, who had given the team her entire catalog to test out the software, soon tweeted an image of one of her paintings, the first ever to have been Glazed. Fittingly, she called it Musa Victoriosa (Victorious Muse), and today it hangs at SAND Lab. Since then, Glaze has been downloaded more than 5 million times.

The success felt good. But it was soon evident that protecting artists at an individual level wouldn’t solve the greater problem of the nonconsensual scraping of images. While SAND Lab researchers had figured out how to confuse generative AI machines, they came to realize there was an opportunity to aim even higher by damaging the data sets used to train the models.

Nightshade, released in early 2024, took a more collective approach. If generative AI models were going to continue to train on images without consent, Nightshade would make sure those images would teach the machines unexpected and unpredictable behavior. If Glaze was built to be a defensive bulwark, Nightshade was designed to go on the attack.

Take an image of a cat. Apply Nightshade to the image, and the AI model will see not a cat but something entirely different — perhaps a chair. Do this to enough images of cats, and gradually the model stops seeing cats and sees only chairs. Ask the same model to generate a picture of a cat, and you get an overstuffed high-back chair instead, maybe even with scrolled wooden feet. While Glaze provides immediate protection on individual images, Nightshade, which has been downloaded more than 2 million times, plays the long game. It poisons the well one image at a time.

Nightshade disrupts AI models by attacking their basic functionality. An AI model is trained using images scraped from data sets, so by manipulating those images, Nightshade can trick it into thinking that one object is actually another — for instance, that a car is a cow. As Nightshade is applied to more and more images of cars, the model will start incorporating cow features when prompted to generate an image of a car. Over time, the more such images there are in the data sets, the more distorted — and more cow-like — the model’s creations will get.
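A stripped-down sketch can show why poisoning works. Again, this is not Nightshade's real algorithm; it is a toy "model" that learns each concept as the average of its training examples, with two made-up feature axes, to illustrate how mislabeled-looking data drags a learned concept toward something else.

```python
import numpy as np

# Toy poisoning sketch (not Nightshade's real method). Axis 0 is an invented
# "wheels-ness" feature, axis 1 an invented "horns-ness" feature.
car_images = [np.array([1.0, 0.0])] * 10   # clean scraped images labeled "car"
poison     = [np.array([0.0, 1.0])] * 10   # poisoned images: labeled "car",
                                           # but their features read "cow"

def learned_concept(examples):
    """The model's internal prototype for a label: the mean of its examples."""
    return np.mean(examples, axis=0)

clean = learned_concept(car_images)
print(clean)      # [1. 0.] -- prompt "car", get a car

# Mix the poisoned images into the scraped training set.
tainted = learned_concept(car_images + poison)
print(tainted)    # [0.5 0.5] -- the "car" concept has drifted halfway to "cow"
```

Each individual poisoned image shifts the prototype only slightly, but the drift compounds as more enter the data set, which is the "poisoning the well one image at a time" dynamic described above.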


I have this memory from my childhood. I’m 7, maybe 8, and I’m erasing pencil drawings out of a small, clothbound notebook that I’ve had since kindergarten. I’m filled with frustration, even shame, as I press my eraser down as hard as I can, determined to redo what I had drawn earlier: ugly, misshapen figures — bulbous heads, uneven arms. As a 5-year-old, I’d had no awareness, but now, at least two years older, I know better. I’ve learned scale and proportion and need to make things right.

This is my first memory of perfectionism — or more accurately, of what Ira Glass has called “the gap,” that space between what you want to create and your ability to create it. Even then, I knew that had there been something to close that gap, I would have lunged for it. In so many ways, this is the promise of generative AI.

Concerns about what technology does to creativity are hardly new. When photography arrived on the scene two centuries ago, the panic was similar. What would happen to the painted portrait now that someone could sit for a photograph? Today we know that those fears were unfounded, that photography only enhanced the ways in which artists experimented and pushed creativity. But there’s this thing about generative AI that makes it different from photography — and really from any technology that’s touched the creative process to date. It eliminates the blank page. I’ll admit, I hate the lone blinking cursor just like I hated the struggle of not being able to draw what I really wanted to draw as a kid. But it’s worth wondering what happens when we remove the friction from generating those first, oftentimes terrible, ideas.

My friend Jordan Hetzer is a graphic designer in New York City. He regularly uses Midjourney — and he also happens to be one of the most creative people I know. When I ask him if he feels any tension over using generative AI, he shakes his head. “These are aesthetics that aren’t anybody’s. You can easily rip off someone, or you can build off of their aesthetic, massaging it until it feels like something entirely new.” As Mark Twain supposedly put it, there is no such thing as a new idea.

I’m inclined to agree with the sentiment. It doesn’t deny the existence of creative evolution but acknowledges that our creativity is the aggregate of all that’s around us and everything that has come before us. Famously, Picasso copied the styles of other painters as a young artist in Paris; the 1907 retrospective of Paul Cézanne at the Salon d’Automne radically influenced how he thought about form. But I can’t shake the idea that even as Picasso mimicked and copied, first he had to physically pick up a paintbrush.

Practically, removing this starting point — or rather, moving it past the messy scratched-out-and-erased beginning — is already having an impact on creative jobs. Hetzer, an art director, uses generative AI to give him what, as recently as five years ago, a junior designer would have. Why wait for your subordinate to draft 30 ideas for logos when you can get those in minutes from a machine? By saving him time, generative AI helps him get to the good stuff faster. But this makes the state of careers in the arts tenuous, as entry-level positions are gradually eliminated. Art schools are closing. Young artists are changing careers. A whole generation of the creative class is slowly being erased.


After the shortcut becomes the only route, what will feed the machine? It’s something Zhao has been thinking about a lot lately. “There’s an interdependence with AI,” Zhao tells me as students start to trickle into the lab. “AI really depends on the future of human artists. And it’s destroying its own future because it’s destroying the pipeline.”

A few days earlier, during a Zoom call with me, Zheng held up a homemade poster. In the middle was a white square with a small human figure curled up in a ball. The background was black with “AI” in large, bright red letters. On top of the letters, in kelly green: “Don’t Generate My Life.” Zheng smiled. It’s the work of one of Zhao and Zheng’s daughters, created for a school assignment to draw something meaningful in her life. I could see Zheng is proud. The poster was born out of conversations they’ve been having as a family, but these words, this way of thinking of the bigger impact of AI, didn’t come from the parents. These were from the kids.

For Zhao and Zheng, it’s all connected — their work in the lab, their family, the problems they seek to solve. Zheng tells me that they regularly get inspiration from their everyday lives. A few years ago, when Zhao considered adding a smart speaker to his shared office with Zheng, she told him no. She had concerns about privacy, specifically that these speakers were, by default, always listening and recording. But instead of Zheng’s rejection closing the door, it made way for a new problem to solve. Could they unplug the device when not in use? No, there’s still a battery. Could they develop some kind of cover? That wouldn’t work either. Instead, Zheng, Zhao, and their SAND Lab, along with U. of C.’s Human Computer Integration Lab, developed the Bracelet of Silence, a wearable piece of technology that disables microphones in the immediate surroundings. If smart speakers could become a worthy opponent, why wouldn’t generative AI serve as a career-defining adversary?

“We only see generative AI in our very mature age,” Zheng told me, putting away the poster. “But kids, they’re being served all this information — the majority of it fake and generative — and it’s hard for them to see their role in it. Why bother to study art? If you lose faith in that, how do you continue creativity from a human perspective?”


I think about the already tenuous state of arts funding, especially for schools, and realize that in the naive, perhaps misplaced, excitement for generative AI, we’ve lost this critical piece of the narrative. But as Zheng sees it, kids don’t necessarily need us to recognize the bleak future that’s being created — slowly they’re beginning to understand what’s happening. At least a few of them are. Last year, a group of middle schoolers in Washington, D.C., held a bake sale to raise money for Glaze and Nightshade. For Zhao and Zheng, all support, no matter how small, is important. In the fight against giants, moments of validation, even by way of cookies and muffins, are the crumbs that keep them moving forward.

Almost immediately after Glaze was released, there was pushback. The software was anonymously added to a user-reported list of viruses and was flagged as malware whenever it was downloaded. Zhao pins it on “tech bros,” but no one’s really sure. There are quite a few people who are not thrilled with the lab’s work. Zhao mentions that conversations at research conferences get awkward; funding for various research projects often comes from Google, Meta, and other powerful companies that are investing heavily in generative AI.

But when I ask Zheng about these adversaries — the people who don’t like the way she and Zhao are disrupting things — she shakes her head, smiling. Ultimately, they don’t matter to her. She points to the poster. “I’d rather see Glaze and Nightshade as a way to tell the young generation that they have agency.”

In 1917, a man using the name Richard Mutt notoriously submitted a porcelain urinal, thrown on its side, to the Society of Independent Artists and called it fine art. In large, drippy letters, the unknown artist signed the urinal (“R. Mutt,” along with the year) and titled it Fountain.

There was immediate shock and confusion. That a factory-made object (a vulgar one at that) could be submitted as art in its unaltered state was cause for outrage. There was no craftsmanship, no technique. Just a urinal.

The French artist Marcel Duchamp, grinning, dripping paintbrush in hand, would eventually take credit for the work. No one really knew what “R. Mutt” meant. Duchamp would later say it was a reference to the popular comic strip Mutt and Jeff. Or a sly nod to the word “ready-made.” Or J.L. Mott Iron Works, the factory that made the urinal. Or a play on French slang for “moneybags.” In the following decades, Duchamp used that paintbrush to sign “reproductions” of Fountain, which made their way, with great acclaim, to galleries and museums after the original was lost.

“I thought the idea behind it was so cool,” says Shan, who took an art history class so he would better understand the fundamentals for his research. The class covered the 20th century — an era characterized by the way it broke apart what was accepted as form and color, redefining what art could be. There was Kazimir Malevich and his white square on white canvas. Later Mark Rothko and his planes of color. Art that would elicit whispers of “I could do that” in museums around the world.

“But it’s not the image that you see,” Shan continues. “It’s the human emotion. It’s the feeling evoked in the audience.”

When I think about art and its relationship with generative AI, I find myself caught in a loop. If art is the human idea, then wouldn’t a human, typing a prompt into something like Midjourney, count as the human effort within the art making?

Or maybe the better question is, What is the correct amount of humanness that makes art art? In the summer of 2022, game designer Jason Allen won first prize for digital art in the Colorado State Fair’s annual fine arts competition with his entry Théâtre D’opéra Spatial. Pairing realism with elements of fantasy, it’s art that’s difficult to describe, which is what makes it so absorbing. It’s the kind of piece that you want to lose yourself in, the way all great art allows. Except it was almost entirely created with generative AI.

When Allen’s entry won the competition, there was the obvious controversy — controversy that was heightened that December when his application to copyright the piece was rejected. It was AI art, the U.S. Copyright Office wrote in its decision, and since the government had already ruled that AI art lacks human authorship, it could not be protected. Even after Allen detailed how he used at least 624 text prompts and input revisions, how he manipulated the raw image in Adobe Photoshop and used Gigapixel AI to increase size and resolution, the office held fast. There was too much machine and not enough human. But if Allen had the idea, does it even matter?

In the months that followed Fountain’s controversial submission, the Dadaist publication The Blind Man published an editorial defending it: “Whether Mr. Mutt with his own hands made the fountain or not has no importance. He CHOSE it. He took an ordinary article of life, placed it so that its useful significance disappeared under the new title and point of view — created a new thought for that object.”

Perhaps this idea of ideas is only a red herring. It shifts our thinking away from the very important reality that art is being scraped, often without consent, and fed to the AI machines and models — machines and models that are backed by businesses that are very interested in making money and don’t seem interested in compensating the artists from whom they’re stealing. Perhaps we’re fixed on the idea of ideas because wrestling with what art is and isn’t almost seems easier than confronting the massive power imbalance between artist and Big Tech.

After interviewing Zhao at the lab, I immediately felt the need to go look at some art. It was a Monday, so the museums were closed, but as I walked down Woodlawn Avenue, I soon passed the Neubauer Collegium for Culture and Society, which had a giant sign for its gallery positioned at the entrance. It was before noon, the U. of C. campus quiet, and so I walked into a predictably empty building. I was pointed in the direction of a small, dark room, where I found myself sitting, quietly and alone, as a screen flashed in front of me. A video installation by the Otolith Group, Mascon: A Massive Concentration of Black Experiential Energy, was projected onto the wall, running in a continuous loop.

On the screen was a kaleidoscope of images, an almost collage-like mosaic of shapes, along with scenes from the films of Ousmane Sembène and Djibril Diop Mambéty, two prominent Senegalese filmmakers from the ’70s. The stratification of color and sound was brilliant — a layering of ideas and movements and artists and mediums to create something singular and unique. I realized I was holding my breath as a parade of blue sliced across the frame.

Multidisciplinary pieces, like the work of the Otolith Group, have obvious differences from the artworks that Glaze and Nightshade currently protect. But that doesn’t mean that Zhao, Shan, and the rest of the lab aren’t already thinking about how this kind of technology can be used across other media. Dancers, composers, voice actors, writers — they’ve all reached out to Zhao to ask if something like Glaze could be created for their disciplines.

But it’s not as simple as creating the same kind of software for other industries. Each has its own set of specific issues related to generative AI. Voice actors, for example, are trying to prevent recordings of their voices from being used without consent (or compensation) to say literally anything in films, commercials, and other media. And I’m all too aware of the surge in AI-generated writing that has flooded the market. In each case, SAND Lab’s researchers know they need to start from the beginning, asking, like they did at the CAA town hall, what each group needs.

Then the lab can look at the AI technology being used and see what opportunities present themselves. Sometimes the answer is relatively straightforward. Right now Stanley Wu, another PhD student at the lab, is looking at how the team could manipulate the vision-language model, the technology that generates descriptive text captions for images, to further confuse AI programs as they classify things. He’s excited — maybe this could be a nice complement to Nightshade. (Also in the works: a combined version of Glaze and Nightshade.)

Call it vigilante justice or guerrilla tactics, but there’s a sense of stalwart duty as the lab attempts to dismantle generative AI. Or at the very least, chip away at it. And it isn’t just artists taking notice: Entertainment companies eager to protect their intellectual property have reached out.

This spring, Shan will begin applying for faculty positions that will likely take him elsewhere. I ask if he’s worried about the career risk associated with working against big technology companies so publicly. Without missing a beat, he says no. “There’s so much at stake, my personal risk is less important.” Besides, the position cuts both ways. Regulators are now looking for honest feedback about generative AI, independent from places like Meta or Microsoft or Google. Shan smiles. “There are very few technologists who are not associated with these companies.”

I also get the sense that there are very few technologists who are thinking about art so intentionally, every single day. No one at the lab admits to being an artist, but it is clear that all of them deeply appreciate art and understand what’s at stake.

Zheng loves ink wash painting, a traditional Chinese art technique that produces beautiful black-and-white forms. She used to paint when she was younger — she was good, she told me — and finds time, whenever she returns to China, to visit a few artists still practicing the form. It’s meaningful to spend time with them, to see their work. “The years of experience, life experience …” Zheng paused. “It’s putting your emotion, your experience, everything into the painting. You see the painting and you can feel that.”

Zhao had told me that it’s nearly impossible for the average person to discern the difference between human-generated and AI-generated art. He theorizes that in the future there will be a premium on art made by humans — in part because there will be so few practicing artists. It’s a world that Glaze and Nightshade aim to prevent — one where generic, cheap copies are so plentiful and human art so rare that it’s more valuable than diamonds. His team is already working on ways to cryptographically prove that a work of art was created by an actual human.
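The article doesn’t describe the lab’s actual scheme, but the general idea behind cryptographic proof of authorship can be illustrated in miniature. The sketch below, using only Python’s standard library, hashes an artwork’s raw bytes and binds that fingerprint to a key held by the artist; any later alteration of the image breaks verification. (A real system would use public-key signatures, such as Ed25519, so that anyone could verify without the artist’s secret — the HMAC here is a stdlib-only stand-in, and the key and image bytes are hypothetical.)

```python
import hashlib
import hmac

def fingerprint(image_bytes: bytes) -> str:
    """Content hash: any pixel-level change produces a different digest."""
    return hashlib.sha256(image_bytes).hexdigest()

def tag(image_bytes: bytes, artist_key: bytes) -> str:
    """Bind the image's fingerprint to a key only the artist holds."""
    digest = fingerprint(image_bytes).encode()
    return hmac.new(artist_key, digest, hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, artist_key: bytes, claimed_tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    return hmac.compare_digest(tag(image_bytes, artist_key), claimed_tag)

# Hypothetical artist key and stand-in for a real image file's bytes.
key = b"artist-secret-key"
art = b"\x89PNG...original pixels"

t = tag(art, key)
assert verify(art, key, t)            # the untouched work checks out
assert not verify(art + b"x", key, t)  # any alteration fails verification
```

The design choice matters: the hash proves *which* bytes are attested, while the keyed tag proves *who* attested them — which is why a swap to public-key signatures, where verification needs no secret, is the natural production version.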

Meanwhile, even as generative AI has rapidly affected Kim Van Deun’s livelihood, she says she has never once regretted pursuing art as a career. This is what art does to us. Whether we make it or become its patron, it gives us a way to envision a different kind of world, a new way of expression. It gives us hope. Hope that I see as I correspond with Van Deun. She continues to illustrate, to work as an artist. She prefers to use Nightshade when she uploads her work to the internet. Of SAND Lab’s two software offerings (so far), that is the one she anticipates will have the most impact. If she were the pebble that teetered on the mountaintop, perhaps Nightshade is the stone that will fell the giant.

As I make my way through the university’s interconnecting quads, I notice the breeze. How the light dapples the sidewalk. Call it mindfulness, call it basic awareness, but I find myself eager to listen to the rustle of the leaves. At home, I watch my daughter empty her box of crayons. She picks up one, already worn down to a stub, and begins to scribble furiously. She pays no attention to the lines, presses down hard enough to tear the paper. There’s no awareness of perfection or style, just an unadulterated love for the color pink. I know that one day she’ll learn how to draw shapes. And eventually those shapes will begin to mean something. But for now, it’s just messy and chaotic. It’s imperfect. It’s so wildly, and wonderfully, human.