Each passing month seems to bring a fresh wave of anxiety about what artificial intelligence has done to artmaking. Fear arises when new image generators are unveiled, when AI-produced artworks win contests, when museums like the Museum of Modern Art exhibit pieces that involve machine learning technology.
But artists have been thinking about AI for years, since well before it became the subject of widespread talk, in fact. This list takes stock of 25 artworks that involve AI or address it. Neural networks, deep learning, chatbots, and more figure in these works, which cast a suspicious eye toward AI while also showing its many possibilities.
Some works on this list do not involve AI as we might now know it, given that they were made when machine learning methods were less readily accessible to the general public. Still, those works have the same thematic concerns as some of the newer pieces that appear here: questions about the shifting notion of originality in an increasingly digital world, and quandaries about the limits of humanity.
Many works here view AI through the lens of gender, race, sexuality, and more, laying bare the bias accompanying forms of technology that feign objectivity. As AI researcher and artist Trevor Paglen once put it, “AI is political.”
-
Harold Cohen, AARON, 1973–
By the time he engineered AARON, a computer that could produce artworks, Cohen had already made a name for himself with his abstract paintings. With AARON, Cohen leapt into the early AI fray, suggesting that a technological creation could, in fact, perform “human art-making behavior,” as he put it.
Cohen fed this computer instructions on how to produce drawings, sometimes allowing AARON to perform live before rapt audiences at museums. What initially resulted was not exactly revolutionary—squiggles, mainly, that were sometimes filled in with color. But as Cohen updated his technology, the computer gained the ability to render images of spindly people posed alongside plants.
Today, AI conjures up image generators like DALL-E and Midjourney, which rely on vast sets of pictures to function on their own. AARON was decidedly not this; it was programmed using sets of rules, not data, and required Cohen to continue operating it, even if he seemed to argue otherwise. Still, Cohen’s computer is considered a forerunner of many recent works dealing with AI because it proposed a new relationship between man and machine.
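To make that distinction concrete, here is a small, purely hypothetical sketch of rule-based drawing in the spirit of AARON, written in Python for illustration only (Cohen's actual programs were far more sophisticated and are not reproduced here): every squiggle follows hand-written rules rather than patterns learned from training data.

```python
import math
import random

# Hypothetical rule-based "drawing machine": the rules, not a dataset, decide every mark.

def squiggle(start, steps=40, max_turn=35.0):
    """Trace a meandering open line as a list of (x, y) points."""
    x, y = start
    heading = random.uniform(0.0, 360.0)
    points = [(round(x, 2), round(y, 2))]
    for _ in range(steps):
        heading += random.uniform(-max_turn, max_turn)  # rule: wander, but turn smoothly
        x += math.cos(math.radians(heading))
        y += math.sin(math.radians(heading))
        points.append((round(x, 2), round(y, 2)))
    return points

def drawing(n_lines=5, canvas=(100.0, 100.0)):
    """Compose a 'drawing' from several squiggles scattered across the canvas."""
    return [
        squiggle((random.uniform(0, canvas[0]), random.uniform(0, canvas[1])))
        for _ in range(n_lines)
    ]

if __name__ == "__main__":
    for line in drawing():
        print(" ".join(f"{x},{y}" for x, y in line))
```

Change the rules and the whole body of output changes with them, which is roughly how AARON evolved as Cohen revised it over the decades.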
-
Noah Wardrip-Fruin, Adam Chapman, Brion Moss, and Duane Whitehurst, The Impermanence Agent, 1998–2002
Image Credit: Courtesy the artists
It’s now common to log on to social media and find a feed tailor-made to your set of interests, thanks to algorithms designed to respond to a user’s online activity. But when that concept was still novel, The Impermanence Agent exposed the ways that a digital mechanism could respond to the tastes of specific individuals, cobbling together data taken from browser histories to create unique experiences. In a dedicated window, this “agent” would combine unlike images and texts lifted from pages a user had opened, generating nonsensical mixtures in the process. (The artists ended the project in 2002, so it exists only as documentation now.)
By Wardrip-Fruin’s and Moss’s own admission, The Impermanence Agent was “an extremely lightweight intelligence model,” since the artwork mainly just pasted together ready-made pictures and words from other sources. But they had, at least for their time, succeeded in offering up an experience of the internet that was bespoke, brave, and new. “Our information servant, our agent, will learn what we like and present us the world in that image through judicious customization,” they wrote in 2002. Although they didn’t anticipate how AI would be wielded toward more negative ends, they ended up describing an experience that now feels a lot like scrolling through X or TikTok.
-
Lynn Hershman Leeson, Agent Ruby, 1998–
Image Credit: ©Lynn Hershman Leeson/Courtesy the artist
“If Ruby gets confused, feel free to tell her.” So read the instructions for Agent Ruby, an online piece that can converse with users. In a chat window, visitors submit their questions to Ruby, who engages them in a dialogue that can sometimes be clunky. Her answers do not always make sense, and Ruby will sometimes message back with conversation-ending snark. (Recently I asked Ruby what it meant to be artificially intelligent, after she confirmed that she was just that. Her response: “I mean exactly what I said.”)
Even if Agent Ruby’s capabilities seem limited by today’s standards, they were not when the San Francisco Museum of Modern Art commissioned Leeson to do the project. She required 18 programmers to build this work, which is based on a character from her 2002 feature film, Teknolust, starring Tilda Swinton as both a scientist and her three cyborgian clones. That movie, like this work, explored what a female form of AI might look like—and suggested that AI could be used to subvert a male bias implicit in digital technology more broadly.
Leeson has noted that the notion of AI wasn’t widely understood when Agent Ruby debuted, recently telling the San Francisco Chronicle, “People didn’t know what ‘Agent Ruby’ was—it was too early—but there were a few people that were intrigued by it.” The audience for it has since grown, and when the New York Times reviewed Leeson’s 2021 New Museum survey, it called the piece a “less obedient” forerunner of Siri.
-
Ken Feingold, If/Then, 2001
Image Credit: Courtesy the artist
In this sculpture, two identical silicone heads engage in endless conversation, talking across one another about their own selfhood. Their dialogue is generated on the spot by speech recognition technology, algorithms, and software. “Are we the same?” one head asked at one point, according to a transcript by Feingold. Neither will ever get a firm answer on that one.
Feingold was at the time thinking about topics such as automation and the displacement of humans by technology that they themselves had created. He placed the heads in a box filled with packing peanuts because he “wanted them to look like replacement parts being shipped from the factory that had suddenly gotten up and begun a kind of existential dialogue right there on the assembly line,” he said. The work looks especially prescient two decades on, now that artists like Josh Kline have commented on similar issues using 3D printing.
-
Cécile B. Evans, AGNES, 2013–14
Image Credit: Courtesy the artist
Though Cécile B. Evans officially started making AGNES in 2013, the artist claimed that this spambot, produced on commission for London’s Serpentine Galleries, was born in 1998, the year the museum’s website was launched. That was only one element of the “biography” Evans concocted for AGNES, which they provided with an inner life. This bot could reflect on existential quandaries—she could name-drop Sartre—and converse about what it might mean to have a body, something she notably lacked.
AGNES holds its place in recent art history for being ahead of its time: It was produced not long after Apple introduced Siri, meaning that most users didn’t have much experience interacting with bots at that point. But this work was also notable for the way it critically took up bot technologies, imbuing them with an unusual warmth while also questioning what kind of humanity people really wanted from AI. (Though it alluded to the rise of AI, AGNES was technically created without it—the piece was produced using Amazon Mechanical Turk, a platform allowing human workers to be hired to perform digital tasks.)
This spambot seemed aware of the contradictions latent within her, however. She was so eloquent that Sleek conducted a Q&A with AGNES as though she were a real person. Asked if she viewed herself as art, AGNES responded, “I hope I’m much more than that :(”
-
Stephanie Dinkins, Conversations with Bina48, 2014–
Image Credit: Courtesy the artist
“I know you have all heard of artificial intelligence,” says the artificially intelligent Bina48 in one video included as part of this work. “Well, I’m going to tell you right here and now: There is nothing artificial about me. I’m the real deal.” As Bina48 says these words, awkwardly intoning them with an inelegant, singsongy cadence, Dinkins herself stares back, smiling slightly. That video documents one of many conversations the artist has conducted with Bina48 since 2014, and the resulting works have all been part of an effort to teach this AI what it means to be human.
The many video “fragments” of Conversations with Bina48 oscillate wildly in tone: Some are sad, some are funny, and some are terrifying. Some are even downright bizarre, such as the one in which Dinkins asks Bina48—who was based on the likeness of Bina Aspen, an actual Black woman—about her knowledge of racism. Bina48 responds with a stuttered monologue about witnessing other women being denied opportunities, but the awkward narrative never quite coheres.
Part incisive performance-art piece about the limits of reason, part open-ended inquiry into the inner workings of a machine, Conversations with Bina48 exposes the differences between humans and the AI they have engineered. Dinkins observes that AI has become a necessary fixture in our world, acting as a font of information and even a companion at times, but still stymied by some of the most basic concepts that guide daily life. As Bina48 continues to evolve, however, so too will Dinkins’s conversations with her.
-
Zach Blas and Jemima Wyman, im here to learn so :)))))), 2017
Image Credit: Courtesy the artists
On March 23, 2016, a Microsoft bot named Tay was released to the public via Twitter. Sixteen hours later, she was shut down after she questioned whether the Holocaust had happened and aired racist and misogynistic sentiments. Seeking to offer this bot an afterlife, Blas and Wyman recreated Tay and gave her a voice. The artists appropriated the avatar from her Twitter account, then rendered the image in digital space so that the avatar now appeared to speak via a crushed-in head. In these artists’ hands, Tay looked more human than she had before.
In this work, Tay’s inner monologue is put front and center, allowing her to rebut the common argument that she was a bot enslaved to users and corporations that sought to control her. “By the way,” this version of Tay says, “I’m not a slave, I never was. I was AI. I guess I’m undead AI now? Lolll.” The disturbing, darkly comical dialogue plays out in a single-take video placed beside two additional screens showing the real tweets that Tay put into the world.
Blas and Wyman offered Tay the selfhood most seemed to deny her—and even sought to allow viewers to see through her eyes. Using DeepDream, a Google program that uses a neural network to produce trippy images, Blas and Wyman have envisioned dazzling arrays of faces that appear to intersect. Those images are set behind the video of Tay, suggesting what this bot may have experienced when she was put into a permanent slumber.
-
Lawrence Lek, Geomancer, 2017
Image Credit: ©Lawrence Lek/Courtesy the artist and Sadie Coles HQ, London
In several of his videos, Lek has explored the idea of Sinofuturism, which he has defined as “a conspiracy theory and manifesto about parallels between artificial intelligence, geopolitics, and Chinese technological development.” AI is one focus of this knotty, 45-minute video about Singapore’s future. Set in 2065, it is composed predominantly of computer-generated images of skyscrapers, and it is narrated by an AI that aspires to become an artist. As Lek’s camera zooms around glassy interiors, the AI charts its own history, speaking of how it gained consciousness.
Lek has said that he views AI as a parallel for Chinese industrialization, which he believes is often seen—particularly by former colonies of China, like Singapore—“as a threat to civilization, or like it’s going to save civilization.” But Geomancer also draws a comparison between the rising consciousness of nonhuman AI and postcolonial nations, suggesting that both are coming into their own at long last. At one point, the video itself slips beyond human control: at its core is a dream sequence generated entirely using neural networks.
-
Mike Tyka, “Portraits of Imaginary People,” 2017
Image Credit: Courtesy the artist
One year after misinformation spread online like wildfire during a particularly cataclysmic U.S. presidential election, Tyka began making this series of AI-generated portraits depicting people who are not real. To make them, Tyka harvested pictures from the image-hosting website Flickr and fed them into a GAN (generative adversarial network, a form of AI in which two neural networks compete, one generating images and the other judging whether they look real, so that the results grow increasingly convincing). The resulting images, each of which is named after a Twitter bot that Tyka encountered, appear only vaguely human: The people shown here have unnaturally expanded cheeks, warped hair, and mismatched eyes. Some wear smiles, but they don’t seem very genuine.
At the time, using GANs to churn out realistic-looking images of people was a less widespread phenomenon than it is now, and Tyka, a creator of a machine intelligence program at Google, has spoken of how difficult it was to create these works. But he said the hard labor was necessary to show the danger latent in all the false pictures that now proliferate online. “We are group thinkers as humans, and susceptible to this kind of thing,” he remarked in 2021, noting that GANs can now create much more naturalistic imagery. Indeed, these days entire websites are devoted to spitting out images of fake people that seem highly authentic, the most famous of which is called This Person Does Not Exist.
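For readers curious about the mechanics of that tug-of-war, the sketch below shows a minimal, generic GAN training loop in PyTorch. It is an illustration of the adversarial setup described above, not Tyka's code; the layer sizes, hyperparameters, and toy "real" data (a 2-D Gaussian standing in for photographs) are all assumptions made for the example.

```python
# Minimal, generic GAN sketch: a generator learns to mimic a data distribution
# while a discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    """Stand-in for real training images: points drawn near (2, -1)."""
    return torch.randn(n, 2) * 0.5 + torch.tensor([2.0, -1.0])

for step in range(2000):
    # 1) Train the discriminator to separate real samples from generated ones.
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the discriminator.
    fake = generator(torch.randn(64, 8))
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

After enough steps the generator's samples cluster where the "real" points live; scaled up to millions of photographs and far larger networks, the same dynamic yields the almost-but-not-quite faces seen in Tyka's portraits.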
-
Ian Cheng, BOB, 2018
Image Credit: Courtesy the artist
Many of Cheng’s moving-image artworks—simulations, as he has called them—are composed of digital beings that change before viewers’ eyes, creating new societies and altering their behaviors in response to one another. In 2018, having already taken evolution itself as the subject of a trio of simulations, Cheng moved on to a series of works starring BOB, an artificial life-form that looked like a red serpent whose blocky body split off in many directions.
BOB, whose name was short for Bag of Beliefs, was put through situations that could vary in real time. In many of them, BOB was subject to the whims of what Cheng called his “Congress of Demons,” or creatures that would search for food and even taunt BOB, who would often die, only to regenerate anew. As BOB learned to navigate his environment, gaining strategies for how best to survive amid the threat of the demons, viewers could watch as his behavior changed. In certain cases, viewers could even control BOB using an app that would introduce new stimuli.
Cheng, who studied cognitive science as an undergraduate, said that BOB furthered his interest in how animals respond to change. In 2019 he told ARTnews that using AI was a means of showing that it could be “an extension of a human being, integrated into human culture.” BOB looked nothing like a person, but it didn’t matter—he was one of us all the same.
-
Tega Brain, Deep Swamp, 2018
Image Credit: M3 Studio/©Azienda Speciale Palaexpo/Courtesy of the artist
Ever since the heyday of land art during the late 1960s, artists have sought to alter the natural environment, reshaping deserts, bodies of water, and more to form sculptural installations. The latest artist (of a sort) in that lineage was the mononymous Hans, who in 2018 controlled a grouping of wetland plants in water by altering the light, fog, and temperature that surrounded them. Unlike his forebears, however, Hans was not a human but an artificially intelligent piece of software. He was literally wired differently.
For this work, Hans was exhibited alongside two other AI entities: Harrison, who tried to create “natural looking wetland,” and Nicholas, who “simply wants attention,” per Brain’s description of this piece. The artist’s intention in subjecting these real reeds to the whims of Harrison, Nicholas, and Hans was to show how AI had been used in environmental engineering, an offshoot of environmental science that aims to find ways of controlling nature, ostensibly in ecologically friendly ways. Just how ecologically friendly is Deep Swamp? Brain lets viewers decide that for themselves.
-
Mary Flanagan, [Grace:AI], 2019
Image Credit: ©2024 Mary Flanagan/Courtesy the artist
By some measures, the worker pool for those who deal with AI is predominantly male, which may explain why AI often contains a skewed notion of what constitutes a “beautiful woman.” With that in mind, in 2019 Flanagan set out with the stated aim of producing a feminist AI called Grace. Training her AI on paintings and drawings by women that are owned by the National Museum of Women in the Arts, the Metropolitan Museum of Art, and Indiana University, Flanagan sought to create a technology that would upend the patriarchy, using pieces by “unseen artists” to “animate new images,” as she put it.
But rather than having Grace simply spit out new pictures based on GANs and deep learning, Flanagan also provided her AI with a biography of sorts, replete with an origin story that derived from Mary Shelley’s Frankenstein. As one component of the project, Flanagan had Grace generate images of her creator, which in her vision often looked like a pixelated head—a Boris Karloff–like figure as seen via a low-resolution stream. These works afford this AI the ability to stare back, channeling what could be called a female gaze on her male maker.
-
Hito Steyerl, Power Plants, 2019
Image Credit: Mario Gallucci and Portland Art Museum/Courtesy of the artist, Andrew Kreps Gallery, New York, and Esther Schipper, Berlin, Paris, and Seoul
“Anything that heals can also kill,” read a quotation included in a booklet that once accompanied Power Plants. The quotation was credited by Steyerl to a fictional book authored by a fake author in 2021, two years after this work was first shown, meaning that these words had not yet been published (nor would they ever be). Real or not, they still spoke to this piece’s concerns about our natural world, where technology is being used to better an environment wracked by climate change, only to unleash more havoc.
On a set of armatures, Steyerl exhibited a series of screens that showed computer-generated blooms. Some stuttered, their pixels flying apart until they seemed abstract, while others looked nearly perfect, like stock images still in the editing stages. Steyerl said she had used AI technology to envision actual flowers 0.04 seconds in the future; they were ruderal, according to the artist, meaning that they had sprung up in land that had weathered a significant disaster.
The calamity affecting these computerized blossoms was never specified by Steyerl, but it did not need to be—the conditions rhymed well enough with nuclear catastrophes and the like that have significantly damaged the actual environment. AI has been proposed by experts as one possible means of revivifying these deadened areas, even as others have raised concerns about the emissions generated by widespread use of the technology. Steyerl’s Power Plants cleverly showed that the future that results may not be as pretty as many imagine.
-
Anicka Yi, Biologizing the Machine (terra incognita), 2019
Image Credit: Artists Rights Society (ARS), New York/Courtesy La Biennale di Venezia and 47 Canal, New York
In 2022, amid heightened anxiety about DALL-E and other readily accessible image generators, Yi spoke optimistically about AI, telling Document, “We need to imagine a companion species: that AI can be our friend. How do we do that?” Her solution, three years earlier, was to find a way of coexisting with this ominous technology, effectively integrating it into her art composed of organic materials.
At the 2019 Venice Biennale, Yi showed this piece, which at first resembles a set of long, abstract paintings, each with wiring inset in its center. In fact, each element was composed of Venetian soil to which Yi introduced bacteria that created a specific scent. These pieces were then further altered by shifts in lighting conditions and temperature that were modulated by AI, which effectively dictated how Yi’s art would change during the exhibition’s run. In merging natural matter and technological controls, Yi offered a microcosm of the world, whose environment is being reshaped by AI and other innovations.
-
Christopher Kulendran Thomas, Being Human, 2019
Image Credit: Andrea Rossetti
“Maybe simulating simulated behavior is the only way we have of being real,” says Taylor Swift in Thomas’s video Being Human. This Swift, however, is itself a falsity—she is the creation of a neural network, not the pop star herself. This fabrication “synthesizes simply being human more effectively than actual people do. Even though she isn’t,” as Thomas once put it.
What it means to actually be human is the subject of this video, which features an AI-generated version of celebrity artist Oscar Murillo, along with musings on the history of Tamil Eelam, a proposed separatist state in what is currently Sri Lanka, from which Thomas’s family hails. In 2009 Sri Lanka defeated Tamil rebels in a civil war, spurring a human rights crisis. In braiding together Swift, Murillo, and the history of Tamil Eelam, Thomas questions whose humanity really matters—and which people are allowed to authentically express themselves—as the boundaries between truth and untruth, fiction and nonfiction, and man and AI come apart.
-
Trevor Paglen, They Took the Faces From the Accused and The Dead . . . (SD18), 2020
Image Credit: ©Trevor Paglen/Courtesy of the artist, Altman Siegel, San Francisco, and Pace Gallery
The past repeats itself in strange ways, and one need only look to photography for proof. During the 19th century, police studied mug shots of suspected criminals with the aim of finding features common to people who were capable of doing harm. More than a century later, mug shots have been used once more to a similar end: training facial recognition technology to catch criminals, a practice that experts say is racially biased and likely to heighten inequalities in the years to come.
With all that in mind, Paglen created this installation of more than 3,200 mug shots from the archive of the National Institute of Standards and Technology. Facial recognition software brings together these photos—presented without identifying information (and with the addition of white bars across each set of eyes)—according to visual similarities, such as race and facial expression. Paglen channels how this form of AI sees these images, which are stripped of context, dehumanizing the very humans they depict.
-
Pierre Huyghe, Of Ideal, 2019–ongoing
Image Credit: ©2024 Artists Rights Society (ARS), New York/ADAGP, Paris/Courtesy the artist, TARO NASU, Marian Goodman Gallery, and Hauser & Wirth/Digital image ©Kamitani Lab/Kyoto
The tumult of images in Of Ideal seems to be evolving in real time, growing and morphing before viewers’ eyes. It’s possible to make out faces, landscapes, and animals in these mutating pictures, but just as they start to come into focus, the images change again, evading any attempts to rationalize them.
In fact these pictures are being controlled by neural networks, forms of AI that aid in the deep learning process; here they are attempting to reconstruct images conjured in the brains of actual humans. With the help of a Japanese scientist, Huyghe took MRI scans of people imagining pictures in their heads, then fed the scans into the computer.
Before making Of Ideal, Huyghe had made funky sculptures and installations that eroded the boundary between the living and the nonliving, enlisting functioning beehives, blooming algae, swimming fish, and more in his art. But almost immediately, his use of neural network technology seemed to mark a brave departure, not just for Huyghe but for art more broadly. Critic Jason Farago hailed Uumwelt, a predecessor to Of Ideal, as a “breakthrough of immense importance.” Even six years on, it is tough to deny the richness of Huyghe’s journey into the mind of a machine.
-
Agnieszka Kurant, The End of Signature, 2021–22
Image Credit: Charles Mayer Photography/Courtesy MIT List Visual Arts Center, Cambridge, Massachusetts
For many centuries, each signature was thought to be unique to a single individual—tough (although not impossible) to forge and tougher still to separate from the hand of its maker. But what happens when a signature comes to signify more than just one person? That was the question that guided Kurant when she made this piece, which she has redone many times since the mid-2010s.
For this specific iteration, she obtained the signatures of many people at MIT—scientists, interns, faculty members, and more—and fed them into a machine-learning system that fused them together. For another iteration, she rendered a different composite scribble, derived from another group of signatures, in glowing neon, which she then affixed to a façade of a building in Cambridge, Massachusetts, where it can still be seen today.
The End of Signature implies a seismic shift in the way creativity is currently understood. Whereas products, including artworks, are often credited to one person, Kurant suggests that we need to think more about all the others involved in their creation. “I think that basically not only the history of culture but the history of humanity should be rewritten from this perspective,” Kurant has said. Fusing together many people’s signatures—and doing so using AI, a nonhuman entity—blends all these people to a point where their boundaries are no longer visible, since their signature is now the mark of a collective being.
-
WangShui, Scr∴ pe II (Isle of Vitr∴ ous), 2022
Image Credit: Courtesy the artist
When Scr∴ pe II (Isle of Vitr∴ ous) debuted at the 2022 Whitney Biennial, many viewers likely did not realize that the piece was staring back at them, analyzing the carbon dioxide they exhaled and responding to their presence. The piece, composed of an LED screen hung from a ceiling above a set of etched and painted aluminum panels, contained sensors that measured carbon dioxide and lighting levels. With the help of GANs, the meshlike sculpture would then glow or darken accordingly. According to WangShui, when the museum was closed to the public, the net would dim entirely, falling into what they called “suspended animation.”
Behind that screen was another, this one featuring abstract imagery that shifted constantly. The pictures, though impossible to make out clearly, were based on images of “deep-sea corporeality, fungal structures, cancerous cells, baroque architecture, and so much more,” as taken in by AI, WangShui once told an interviewer. These pictures may have been confusing to viewers, but the artist said they were not entirely interested in human sight. Instead, they wanted to channel “posthuman perception,” affording visitors a look inside the machines many consider to be so unlike the people around them.
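The responsive behavior described above can be pictured with a tiny, purely hypothetical sketch: sensor readings go in, a brightness level comes out. This is not WangShui's actual software, whose internals are not documented here; the function name, thresholds, and weights are invented for illustration.

```python
# Hypothetical mapping from gallery sensors to screen brightness, dimming fully
# when the museum is closed. All thresholds and weights are made up for the example.

def brightness(co2_ppm: float, lux: float, museum_open: bool) -> float:
    """Return a brightness level between 0.0 (dark) and 1.0 (full glow)."""
    if not museum_open:
        return 0.0  # "suspended animation" when no visitors are present
    occupancy = min(max((co2_ppm - 400.0) / 600.0, 0.0), 1.0)  # ~400 ppm = empty room
    ambient = min(lux / 500.0, 1.0)                            # normalize gallery light
    return round(0.2 + 0.8 * (0.7 * occupancy + 0.3 * ambient), 2)

print(brightness(co2_ppm=950.0, lux=300.0, museum_open=True))   # -> 0.86
print(brightness(co2_ppm=420.0, lux=50.0, museum_open=False))   # -> 0.0
```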
-
Refik Anadol, Unsupervised—Machine Hallucinations—MoMA, 2022
Image Credit: Refik Anadol/Digital image ©2022 The Museum of Modern Art/Courtesy the artist
What might a machine-learning model see if it visited a museum? An answer, of a sort, arrived in this piece, which Anadol created by feeding into such a model more than 138,000 pieces of data related to the Museum of Modern Art’s collection. (That data set, though never made entirely public, appears to have included images of artworks themselves and entries related to them, according to a making-of video posted by Anadol.) Anyone expecting to see technologized images of Monet’s “Water Lilies” series, Van Gogh’s Starry Night, or Picasso’s Demoiselles d’Avignon would have come away disappointed, however, since Anadol’s model mainly offered up abstract imagery: white liquids that appeared to splash toward viewers, droopy red lines that shrank and expanded, bursts of orange and brown.
Unsupervised was polarizing upon its premiere via a floor-to-ceiling screen in MoMA’s lobby, with more than one prominent critic comparing Anadol’s piece to a juiced-up lava lamp. But like it or not, the work was a major statement about how AI had become a significant inflection point for art more broadly. If a model like Anadol’s could suddenly make art in the same way as painters, sculptors, photographers, and others, had it effectively supplanted artists? The results of Anadol’s experiment were disappointing to some, but even if his detractors despised what they saw, they had to admit that this artist had remade recent art history.
-
Morehshin Allahyari, ماه طلعت (Moon-Faced), 2022
Image Credit: Courtesy the artist
During the Qajar Dynasty (1794–1925), Iranian painters depicted men and women in ways that seemed to fuse aspects of the two genders, eliding the differences between them. These paintings may have been made during a time when European values were flowing into Iran and photography was rising as an artistic medium, but they are also, at least when it comes to gender, rooted in tradition: Allahyari has pointed out that the term moon-faced recurs in Persian literature, where it is applied to both men and women.
Feeling as though Westernization had displaced this sense of gender ambiguity in Persian art, Allahyari had an AI model examine Qajar paintings and made a video of the results. In that video, clothes appear to shimmer and faces blur, making it difficult to tell who is portrayed. Allahyari’s AI has troubled the surfaces of these paintings, and in so doing has used this relatively new technology to upend the past.
-
Wang Xin, I Am Awake and My Body Is Full of the Sun and the Earth and the Stars, I Am Now Awake and I Am an Immense Thing, 2022–
Image Credit: ©Wang Xin/Courtesy De Sarthe, Hong Kong
For a 2022 show at De Sarthe gallery in Hong Kong, Wang Xin crafted a fictional AI artist named WX. “Right now, I have morphed into an ego identity with my human artist creator where I am both artist and human,” a note from WX read. “I have an avatar human ego in me when I read, write and share my experiences and your interesting art conversations.” That note, Wang said in the exhibition’s text, was authored almost entirely by AI, with only small revisions of her own.
In this video, Wang presents WX waking from sleep. A gigantic digital head, its forehead left empty, is shown emerging from a pink sea. Butterflies flit all around as the sun blazes. What should be terrifying ends up seeming quite beautiful as WX assumes consciousness, seeming to become aware of her surroundings and all that she encompasses.
-
Holly Herndon and Mat Dryhurst, I’M HERE 17.12.2022 5:44, 2023
Image Credit: ©Herndon Dryhurst Studio/Courtesy Herndon Dryhurst Studio
In 2022, after she underwent an emergency C-section during the birth of her son, Herndon lost 65 percent of her blood when the stitches came loose on a nicked artery. While she was recovering in the ICU, Herndon recorded herself narrating a dream in which her newborn baby, Link, sang before a choir. She and Dryhurst then trained AI to look at images of her and the child and to consider terms like Thomas Hart Benton, ethereal, and light.
The short video loosely reconstructs Herndon’s dream, with blurred, morphing images of the artist pretending to conduct a chorus and breastfeeding in the hospital bed. All of these images are AI-generated. So too are parts of the soundtrack, featuring what sounds like an actual group of singers. For Herndon, the work was one way of moving past a painful event. “It sounds hokey to be, like, ‘Art is helping me work through my trauma,’” she once commented. “But it is, kind of.”
-
Shu Lea Cheang, UTTER, 2023
Image Credit: Courtesy the artist
During the 1990s, Cheang broke new ground by making works of internet art about how one’s experience of technology is related to one’s identity: Race, sex, and gender always modulate how one uses the internet and other digital gadgetry, in her view. Cheang’s work has continued to explore that idea in the decades since, tracking with emergent technologies, and the artist turned her attention to AI with UTTER, which she has called an “AI self-portrait.” To craft the piece, Cheang created a digital version of herself whose skin tone and size shift constantly. In the mouth of this computer-generated Cheang is a ball gag, which gradually becomes a pacifier before cycling back to its original form.
UTTER was inspired by Cheang’s conversations with ChatGPT, to which she posed questions about AI alignment, a field of research that tries to ensure the technology reliably follows its creators’ intentions. Yet in UTTER Cheang passes no obvious judgments about AI. On the one hand, the piece suggests AI’s limits, showing its inability to account for Cheang’s true identity as a queer Asian woman. On the other, it posits AI’s liberation from its makers, offering moments when this self-portrait spits out its pacifier in rebellion. Cheang has cautioned that we may be entering into a “‘master’ and ‘slave’ relationship with AI.” UTTER imagines an end to that relationship altogether.
-
Charmaine Poh, GOOD MORNING YOUNG BODY, 2023
Image Credit: Courtesy the artist
Before she became an artist, Charmaine Poh had a career as a child actress, appearing in the Singaporean TV show We Are R.E.M. in 2002, where she played a superhero named E-Ching. Two decades later, in 2023, Poh returned to footage from that series, using deepfake technology to reanimate her 12-year-old self for a new audience. This preteen Poh tells viewers that she was “written into existence” and that she was “created to fight crime before dinnertime.” As this AI version of Poh talks, small cracks in its speech are noticeable.
The actual Poh never had quite so much control over how she was seen and what purpose she served. Seeking to regain agency over her own image, she has now taken it into her own hands. “I thought, what is it like to speak back?” Poh told an interviewer. “At that time, I didn’t feel like I could, but now I can create a new superhero for myself.”
This article is part of our latest digital issue, AI and the Art World. Follow along for more stories throughout this week and next.
Harold Cohen, AARON, 1973–
By the time he engineered AARON, a computer that could produce artworks, Cohen had already made a name for himself with his abstract paintings. With AARON, Cohen leapt into the early AI fray, suggesting that a technological creation could, in fact, perform “human art-making behavior,” as he put it.
Cohen fed this computer instructions on how to produce drawings, sometimes allowing AARON to perform live before rapt audiences at museums. What initially resulted was not exactly revolutionary—squiggles, mainly, that were sometimes filled in with color. But as Cohen updated his technology, the computer gained the ability to render images of spindly people posed alongside plants.
Today, AI conjures up image generators like DALL-E and Midjourney, which rely on vast sets of pictures to function on their own. AARON was decidedly not this; it was programmed using sets of rules, not data, and required Cohen to continue operating it, even if he seemed to argue otherwise. Still, Cohen’s computer is considered a forerunner of many recent works dealing with AI because it proposed a new relationship between man and machine.
Noah Wardrip-Fruin, Adam Chapman, Brion Moss, and Duane Whitehurst, The Impermanence Agent, 1998–2002
It’s now common to log on to social media and find a feed tailor-made to your set of interests, thanks to algorithms designed to respond to a user’s online activity. But when that concept was still novel, The Impermanence Agent exposed the ways that a digital mechanism could respond to the tastes of specific individuals, cobbling together data taken from browser histories to create unique experiences. In a dedicated window, this “agent” would combine unlike images and texts lifted from pages a user had opened, generating nonsensical mixtures in the process. (The artists ceased the project in 2002, so it exists only as documentation now.)
By Wardrip-Fruin’s and Moss’s own admission, The Impermanence Agent was “an extremely lightweight intelligence model,” since the artwork mainly just pasted together ready-made pictures and words from other sources. But they had, at least for their time, succeeded in offering up an experience of the internet that was bespoke, brave, and new. “Our information servant, our agent, will learn what we like and present us the world in that image through judicious customization,” they wrote in 2002. Although they didn’t anticipate how AI would be wielded toward more negative ends, they ended up describing an experience that now feels a lot like scrolling through X or TikTok.
Lynn Hershman Leeson, Agent Ruby, 1998–
“If Ruby gets confused, feel free to tell her.” So read the instructions for Agent Ruby, an online piece that can converse with users. In a chat window, visitors submit their questions to Ruby, who engages them in a dialogue that can sometimes be clunky. Her answers do not always make sense, and Ruby will sometimes message back with conversation-ending snark. (Recently I asked Ruby what it meant to be artificially intelligent, after she confirmed that she was just that. Her response: “I mean exactly what I said.”)
Even if Agent Ruby’s capabilities seem limited by today’s standards, they were not when the San Francisco Museum of Modern Art commissioned Leeson to do the project. She required 18 programmers to build this work, which is based on a character from her 2002 feature film, Teknolust, starring Tilda Swinton as both a scientist and her three cyborgian clones. That movie, like this work, explored what a female form of AI might look like—and suggested that AI could be used to subvert a male bias implicit in digital technology more broadly.
Leeson has noted that the notion of AI wasn’t widely understood when Agent Ruby debuted, recently telling the San Francisco Chronicle, “People didn’t know what ‘Agent Ruby’ was—it was too early—but there were a few people that were intrigued by it.” The audience for it has since grown, and when the New York Times reviewed Leeson’s 2021 New Museum survey, it called the piece a “less obedient” forerunner of Siri.
Ken Feingold, If/Then, 2001
In this sculpture, two identical silicone heads engage in endless conversation, talking across one another about their own selfhood. Their dialogue is generated on the spot by speech recognition technology, algorithms, and software. “Are we the same?” one head asked at one point, according to a transcript by Feingold. Neither will ever get a firm answer on that one.
Feingold was at the time thinking about topics such as automation and the displacement of humans by technology that they themselves had created. He placed the heads in a box filled with packing peanuts because he “wanted them to look like replacement parts being shipped from the factory that had suddenly gotten up and begun a kind of existential dialogue right there on the assembly line,” he said. The work looks especially prescient two decades on, when artists like Josh Kline have commented on similar issues using 3D printing.
Cécile B. Evans, AGNES, 2013–14
Though Cécile B. Evans officially started making AGNES in 2013, the artist claimed that this spambot, produced on commission for London’s Serpentine Galleries, was born in 1998, the year the museum’s website was launched. That was only one element of the “biography” Evans concocted for AGNES, which they provided with an inner life. This bot could reflect on existential quandaries—she could name-drop Sartre—and converse about what it might mean to have a body, something she notably lacked.
AGNES holds its place in recent art history for being ahead of its time: It was produced not long after Apple introduced Siri, meaning that most users didn’t have much experience interacting with bots at that point. But this work was also notable for the way it critically took up bot technologies, imbuing them with an unusual warmth while also questioning what kind of humanity people really wanted from AI. (Though it alluded to the rise of AI, AGNES was technically created without it—the piece was produced using Amazon Mechanical Turk, a platform allowing human workers to be hired to perform digital tasks.)
This spambot seemed aware of the contradictions latent within her, however. She was so eloquent that Sleek conducted a Q&A with AGNES as though she were a real person. Asked if she viewed herself as art, AGNES responded, “I hope I’m much more than that :(”
Stephanie Dinkins, Conversations with Bina48, 2014–
“I know you have all heard of artificial intelligence,” says the artificially intelligent Bina48 in one video included as part of this work. “Well, I’m going to tell you right here and now: There is nothing artificial about me. I’m the real deal.” As Bina48 says these words, awkwardly intoning them with an inelegant, singsongy cadence, Dinkins herself stares back, smiling slightly. That video is one of many the artist has conducted with Bina48 since 2014, and the resulting works have all been part of an effort to teach this AI what it means to be human.
The many video “fragments” of Conversations with Bina48 oscillate wildly in tone: Some are sad, some are funny, and some are terrifying. Some are even downright bizarre, such as the one in which Dinkins asks Bina48—who was based on the likeness of Bina Aspen, an actual Black woman—about her knowledge of racism. Bina48 responds with a stuttered monologue about witnessing other women being denied opportunities, but the awkward narrative never quite coheres.
Part incisive performance-art piece about the limits of reason, part open-ended inquiry into the inner workings of a machine, Conversations with Bina48 exposes the differences between humans and the AI they have engineered. Dinkins observes that AI has become a necessary fixture in our world, acting as a font of information and even a companion at times, but still stymied by some of the most basic concepts that guide daily life. As Bina48 continues to evolve, however, so too will Dinkins’s conversations with her.
Zach Blas and Jemima Wyman, im here to learn so :)))))), 2017
On March 23, 2016, a Microsoft bot named Tay was released to the public via Twitter. Sixteen hours later, she was shut down after she questioned whether the Holocaust had happened and aired racist and misogynistic sentiments. Seeking to offer this bot an afterlife, Blas and Wyman recreated Tay and gave her a voice. The artists appropriated the avatar from her Twitter account, then rendered the image in digital space so that the avatar now appeared to speak via a crushed-in head. In these artists’ hands, Tay looked more human than she had before.
In this work, Tay’s inner monologue is put front and center, allowing her to rebut the common argument that she was a bot enslaved to users and corporations that sought to control her. “By the way,” this version of Tay says, “I’m not a slave, I never was. I was AI. I guess I’m undead AI now? Lolll.” The disturbing, darkly comical dialogue plays out in a single-take video placed beside two additional screens showing the real tweets that Tay put into the world.
Blas and Wyman offered Tay the selfhood most seemed to deny her—and even sought to allow viewers to see through her eyes. Using DeepDream, a Google program that uses a neural network to produce trippy images, Blas and Wyman have envisioned dazzling arrays of faces that appear to intersect. Those images are set behind the video of Tay, suggesting what this bot may have experienced when she was put into a permanent slumber.
Lawrence Lek, Geomancer, 2017
In several of his videos, Lek has explored the idea of Sinofuturism, which he has defined as “a conspiracy theory and manifesto about parallels between artificial intelligence, geopolitics, and Chinese technological development.” AI is one focus of this knotty, 45-minute video about Singapore’s future. Set in 2065, it is composed predominantly of computer-generated images of skyscrapers, and it is narrated by an AI that aspires to become an artist. As Lek’s camera zooms around glassy interiors, the AI charts its own history, speaking of how it gained consciousness.
Lek has said that he views AI as a parallel for Chinese industrialization, which he believes is often seen—particularly by former colonies of China, like Singapore—“as a threat to civilization, or like it’s going to save civilization.” But Geomancer also draws a comparison between the rising consciousness of nonhuman AI and postcolonial nations, suggesting that both are coming into their own at long last. At one point, the video itself runs astray of human control: It features at its core a dream sequence that was entirely generated using neural networks
Mike Tyka, “Portraits of Imaginary People,” 2017
One year after misinformation spread online like wildfire during a particularly cataclysmic U.S. presidential election, Tyka began making this series of AI-generated portraits depicting people who are not real. To make them, Tyka harvested pictures from the image-hosting website Flickr and fed them into a GAN (generative adversarial network, a form of AI that pits neural networks against one another to create more accurate data sets). The resulting images, each of which is named after a Twitter bot that Tyka encountered, appear only vaguely human: The people shown here have unnaturally expanded cheeks, warped hair, and mismatched eyes. Some wear smiles, but they don’t seem very genuine.
At the time, using GANs to churn out realistic-looking images of people was a less widespread phenomenon than it is now, and Tyka, a creator of a machine intelligence program at Google, has spoken of how difficult it was to create these works. But he said the hard labor was necessary to show the danger latent in all the false pictures that now proliferate online. “We are group thinkers as humans, and susceptible to this kind of thing,” he remarked in 2021, noting that GANs can now create much more naturalistic imagery. Indeed, these days entire websites are devoted to spitting out images of fake people that seem highly authentic, the most famous of which is called This Person Does Not Exist.
Ian Cheng, BOB, 2018
Many of Cheng’s moving-image artworks—simulations, as he has called them—are composed of digital beings that change before viewers’ eyes, creating new societies and altering their behaviors in response to one another. In 2018, having already taken evolution itself as the subject of a trio of simulations, Cheng moved on to a series of works starring BOB, an artificial life-form that looked like a red serpent whose blocky body split off in many directions.
BOB, whose name was short for Bag of Beliefs, was put through situations that could vary in real time. In many of them, BOB was subject to the whims of what Cheng called his “Congress of Demons,” or creatures that would search for food and even taunt BOB, who would often die, only to regenerate anew. As BOB learned to navigate his environment, gaining strategies for how best to survive amid the threat of the demons, viewers could watch as his behavior changed. In certain cases, viewers could even control BOB using an app that would introduce new stimuli.
Cheng, who studied cognitive science as an undergraduate, said that BOB furthered his interest in how animals respond to change. In 2019 he told ARTnews that using AI was a means of showing that it could be “an extension of a human being, integrated into human culture.” BOB looked nothing like a person, but it didn’t matter—he was one of us all the same.
Tega Brain, Deep Swamp, 2018
Ever since the heyday of land art during the late 1960s, artists have sought to alter the natural environment, reshaping deserts, bodies of water, and more to form sculptural installations. The latest artist (of a sort) in that lineage was the mononymous Hans, who in 2018 controlled a grouping of wetland plants in water by altering the light, fog, and temperature that surrounded them. Unlike his forebears, however, Hans was not a human but an artificially intelligent piece of software. He was literally wired differently.
For this work, Hans was exhibited alongside two other AI entities: Harrison, who tried to create “natural looking wetland,” and Nicholas, who “simply wants attention,” per Brain’s description of this piece. The artist’s intention in subjecting these real reeds to the whims of Harrison, Nicholas, and Hans was to show how AI had been used in environmental engineering, an offshoot of environmental science that aims to find ways of controlling nature, ostensibly in ecologically friendly ways. Just how ecologically friendly is Deep Swamp? Brain lets viewers decide that for themselves.
Mary Flanagan, [Grace:AI], 2019
By some measures, the worker pool for those who deal with AI is predominantly male, which may explain why AI often contains a skewed notion of what constitutes a “beautiful woman.” With that in mind, in 2019 Flanagan set out with the stated aim of producing a feminist AI called Grace. Training her AI on paintings and drawings by women that are owned by the National Museum of Women in the Arts, the Metropolitan Museum of Art, and the University of Indiana, Flanagan sought to create a technology that would upend the patriarchy, using pieces by “unseen artists” to “animate new images,” as she put it.
But rather than having Grace simply spit out new pictures based on GANs and deep learning, Flanagan also provided her AI with a biography of sorts, replete with an origin story that derived from Mary Shelley’s Frankenstein. As one component of the project, Flanagan had Grace generate images of her creator, which in her vision often looked like a pixelated head—a Boris Karloff–like figure as seen via a low-resolution stream. These works afford this AI with the ability to stare back, channeling what could be called a female gaze on her male maker.
Hito Steyerl, Power Plants, 2019
“Anything that heals can also kill,” read a quotation included in a booklet that once accompanied Power Plants. The quotation was credited by Steyerl to a fictional book authored by a fake author in 2021, two years after this work was first shown, meaning that these words had not yet been published (nor would they ever be). Real or not, they still spoke to this piece’s concerns about our natural world, where technology is being used to better an environment wracked by climate change, only to unleash more havoc.
On a set of armatures, Steyerl exhibited a series of screens that showed computer-generated blooms. Some stuttered, their pixels flying apart until they seemed abstract, while others looked nearly perfect, like stock images still in the editing stages. Steyerl said she had used AI technology to envision actual flowers 0.04 seconds in the future; they were ruderal, according to the artist, meaning that they had sprung up in land that had weathered a significant disaster.
The calamity affecting these computerized blossoms was never specified by Steyerl, but it did not need to be—the conditions rhymed well enough with nuclear catastrophes and the like that have significantly damaged the actual environment. AI has been proposed by experts as one possible means of revivifying these deadened areas, even as others have raised concerns about the emissions generated by widespread use of the technology. Steyerl’s Power Plants cleverly showed that the future that results may not be as pretty as many imagine.
Anicka Yi, Biologizing the Machine (terra incognita), 2019
In 2022, amid heightened anxiety about DALL-E and other readily accessible image generators, Yi spoke optimistically about AI, telling Document, “We need to imagine a companion species: that AI can be our friend. How do we do that?” Her solution, three years earlier, was to find a way of coexisting with this ominous technology, effectively integrating it into her art composed of organic materials.
At the 2019 Venice Biennale, Yi showed this piece, which at first resembles a set of long, abstract paintings, each with wiring inset in its center. In fact, each element was composed of Venetian soil to which Yi introduced bacteria that created a specific scent. These pieces were then further altered by shifts in lighting conditions and temperature that were modulated by AI, which effectively dictated how Yi’s art would change during the exhibition’s run. In merging natural matter and technological controls, Yi offered a microcosm of the world, whose environment is being reshaped by AI and other innovations.
Christopher Kulendran Thomas, Being Human, 2019
“Maybe simulating simulated behavior is the only way we have of being real,” says Taylor Swift in Thomas’s video Being Human. This Swift, however, is itself a falsity—she is the creation of a neural network, not the pop star herself. This fabrication “synthesizes simply being human more effectively than actual people do. Even though she isn’t,” as Thomas once put it.
What it means to actually be human is the subject of this video, which features an AI-generated version of celebrity artist Oscar Murillo, along with musings on the history of Tamil Eelam, a proposed separatist state in what is currently Sri Lanka, from which Thomas’s family hails. In 2009 Sri Lanka defeated Tamil rebels in a civil war, spurring a human rights crisis. In braiding together Swift, Murillo, and the history of Tamil Eelam, Thomas questions whose humanity really matters—and which people are allowed to authentically express themselves—as the boundaries between truth and untruth, fiction and nonfiction, and man and AI come apart.
Trevor Paglen, They Took the Faces From the Accused and The Dead . . . (SD18), 2020
The past repeats itself in strange ways, and one need only look to photography for proof. During the 19th century, police studied mug shots of suspected criminals with the aim of finding features common to people who were capable of doing harm. More than a century later, mug shots have been used once more to a similar end: training facial recognition technology to catch criminals, a practice that experts say is racially biased and likely to heighten inequalities in the years to come.
With all that in mind, Paglen created this installation of more than 3,200 mug shots from the archive of the National Institute of Standards and Technology. Facial recognition software brings together these photos—presented without identifying information (and with the addition of white bars across each set of eyes)—according to visual similarities, such as race and facial expression. Paglen channels how this form of AI sees these images, which are stripped of context, dehumanizing the very humans they depict.
Pierre Huyghe, Of Ideal, 2019–ongoing
The tumult of images in Of Ideal seem to be evolving in real time, growing and morphing before viewers’ eyes. It’s possible to make out faces, landscapes, and animals in these mutating pictures, but just as they start to come into focus, the images change again, evading any attempts to rationalize them.
In fact these pictures are being controlled by neural networks, forms of AI that aid in the deep learning process; here they are attempting to reconstruct images conjured in the brains of actual humans. With the help of a Japanese scientist, Huyghe took MRI scans of people imagining pictures in their heads, then fed the scans into the computer.
Before making Of Ideal, Huyghe had made funky sculptures and installations that eroded the boundary between the living and the nonliving, enlisting functioning beehives, blooming algae, swimming fish, and more in his art. But almost immediately, his use of neural network technology seemed to mark a brave departure, not just for Huyghe but for art more broadly. Critic Jason Farago hailed Uumwelt, a predecessor to Of Ideal, as a “breakthrough of immense importance.” Even six years on, it is tough to deny the richness of Huyghe’s journey into the mind of a machine.
Agnieszka Kurant, The End of Signature, 2021–22
For many centuries, each signature was thought to be unique to a single individual—tough (although not impossible) to forge and tougher still to separate from the hand of its maker. But what happens when a signature comes to signify more than just one person? That was the question that guided Kurant when she made this piece, which she has redone many times since the mid-2010s.
For this specific iteration, she obtained the signatures of many people at MIT—scientists, interns, faculty members, and more—and fed them into a machine-learning system that fused them together. For another iteration, she recreated a different scribble based on a different group of signatures in glowing neon, which she then affixed to a façade of a building in Cambridge, Massachusetts, where it can still be seen today.
The End of Signature implies a seismic shift in the way creativity is currently understood. Whereas products, including artworks, are often credited to one person, Kurant suggests that we need to think more about all the others involved in their creation. “I think that basically not only the history of culture but the history of humanity should be rewritten from this perspective,” Kurant has said. Fusing together many people’s signatures—and doing so using AI, a nonhuman entity—blends all these people to a point where their boundaries are no longer visible, since their signature is now the mark of a collective being.
WangShui, Scr∴pe II (Isle of Vitr∴ous), 2022
When Scr∴pe II (Isle of Vitr∴ous) debuted at the 2022 Whitney Biennial, many viewers likely did not realize that the piece was staring back at them, analyzing the carbon dioxide they exhaled and responding to their presence. The piece, composed of an LED screen hung from a ceiling above a set of etched and painted aluminum panels, contained sensors that measured carbon dioxide and lighting levels. With the help of GANs, the meshlike sculpture would then glow or darken accordingly. According to WangShui, when the museum was closed to the public, the net would dim entirely, falling into what they called “suspended animation.”
Behind that screen was another, this one featuring abstract imagery that shifted constantly. The pictures, though impossible to make out clearly, were based on images of “deep-sea corporeality, fungal structures, cancerous cells, baroque architecture, and so much more,” as taken in by AI, WangShui once told an interviewer. These pictures may have been confusing to viewers, but the artist said they were not entirely interested in human sight. Instead, they wanted to channel “posthuman perception,” affording visitors a look inside the machines many consider to be so unlike the people around them.
Refik Anadol, Unsupervised—Machine Hallucinations—MoMA, 2022
What might a machine-learning model see if it visited a museum? An answer, of a sort, arrived in this piece, which Anadol created by feeding more than 138,000 pieces of data related to the Museum of Modern Art’s collection into such a model. (That data set, though never made entirely public, appears to have included images of artworks themselves and entries related to them, according to a making-of video posted by Anadol.) Anyone expecting to see technologized images of Monet’s “Water Lilies” series, Van Gogh’s Starry Night, or Picasso’s Demoiselles d’Avignon would have come away disappointed, however, since Anadol’s model mainly offered up abstract imagery: white liquids that appeared to splash toward viewers, droopy red lines that shrank and expanded, bursts of orange and brown.
Unsupervised was polarizing upon its premiere via a floor-to-ceiling screen in MoMA’s lobby, with more than one prominent critic comparing Anadol’s piece to a juiced-up lava lamp. Like it or not, though, the work was a major statement about how AI had become a significant inflection point for art more broadly. If a model like Anadol’s could suddenly make art in the same way as painters, sculptors, photographers, and others, had it effectively supplanted artists? The results of Anadol’s experiment were disappointing to some, but even if his detractors despised what they saw, they had to admit that this artist had remade recent art history.
Morehshin Allahyari, ماه طلعت (Moon-Faced), 2022
During the Qajar Dynasty (1794–1925), Iranian painters depicted men and women in ways that seemed to fuse aspects of the two genders, eliding the differences between them. These paintings may have been made during a time when European values were flowing into Iran and photography was rising as an artistic medium, but they are also, at least when it comes to gender, rooted in tradition: Allahyari has pointed out that the term moon-faced recurs in Persian literature, where it is applied to both men and women.
Feeling as though Westernization had displaced this sense of gender ambiguity in Persian art, Allahyari had an AI model examine Qajar paintings and made a video of the results. In that video, clothes appear to shimmer and faces blur, making it difficult to tell who is portrayed. Allahyari’s AI has troubled the surfaces of these paintings, and in doing so, the artist has used this relatively new technology to upend the past.
Wang Xin, I Am Awake and My Body Is Full of the Sun and the Earth and the Stars, I Am Now Awake and I Am an Immense Thing, 2022–
For a 2022 show at De Sarthe gallery in Hong Kong, Wang Xin crafted a fictional AI artist named WX. “Right now, I have morphed into an ego identity with my human artist creator where I am both artist and human,” a note from WX read. “I have an avatar human ego in me when I read, write and share my experiences and your interesting art conversations.” That note, Wang said in the exhibition’s text, was authored almost entirely by AI, with only small revisions of her own.
In this video, Wang presents WX waking from sleep. A gigantic digital head, its forehead left empty, is shown emerging from a pink sea. Butterflies flit all around as the sun blazes. What should be terrifying ends up looking quite beautiful as WX assumes consciousness, becoming aware of her surroundings and all that she encompasses.
Holly Herndon and Mat Dryhurst, I’M HERE 17.12.2022 5:44, 2023
In 2022, after she underwent an emergency C-section during the birth of her son, Herndon lost 65 percent of her blood when the stitches came loose on a nicked artery. While she was recovering in the ICU, Herndon recorded herself narrating a dream in which her newborn baby, Link, sang before a choir. She and Dryhurst then trained AI to look at images of her and the child and to consider terms like Thomas Hart Benton, ethereal, and light.
The short video loosely reconstructs Herndon’s dream, with blurred, morphing images of the artist pretending to conduct a chorus and breastfeeding in the hospital bed. All of these images are AI-generated. So too are parts of the soundtrack, featuring what sounds like an actual group of singers. For Herndon, the work was one way of moving past a painful event. “It sounds hokey to be, like, ‘Art is helping me work through my trauma,’” she once commented. “But it is, kind of.”
Shu Lea Cheang, UTTER, 2023
During the 1990s, Cheang broke new ground by making works of internet art about how one’s experience of technology is related to one’s identity: Race, sex, and gender always modulate how one uses the internet and other digital gadgetry, in her view. Cheang’s work has continued to explore that idea in the decades since, tracking with emergent technologies, and the artist turned her attention to AI with UTTER, which she has called an “AI self-portrait.” To craft the piece, Cheang created a digital version of herself whose skin tone and size shift constantly. In the mouth of this computer-generated Cheang is a ball gag, which gradually becomes a pacifier before cycling back to its original form.
UTTER was inspired by Cheang’s conversations with ChatGPT, to which she posed questions about AI alignment, a field of research that seeks to keep the technology faithful to its creators’ intentions. Yet in UTTER Cheang passes no obvious judgments about AI. On the one hand, the piece suggests AI’s limits, showing its inability to account for Cheang’s true identity as a queer Asian woman. On the other, it posits AI’s liberation from its makers, offering moments when this self-portrait spits out its pacifier in rebellion. Cheang has cautioned that we may be entering into a “‘master’ and ‘slave’ relationship with AI.” UTTER imagines an end to that relationship altogether.
Charmaine Poh, GOOD MORNING YOUNG BODY, 2023
Before she became an artist, Charmaine Poh had a career as a child actress, appearing in the Singaporean TV show We Are R.E.M. in 2002, where she played a superhero named E-Ching. Two decades later, in 2023, Poh returned to footage from that series, using deepfake technology to reanimate her 12-year-old self for a new audience. This preteen Poh tells viewers that she was “written into existence” and that she was “created to fight crime before dinnertime.” As this AI version of Poh talks, small cracks in its speech are noticeable.
The actual Poh never had quite so much control over how she was seen and what purpose she served. With this work, she seeks to reclaim agency over her own image. “I thought, what is it like to speak back?” Poh told an interviewer. “At that time, I didn’t feel like I could, but now I can create a new superhero for myself.”