THE FUTURE IS HERE

A never-ending stream of AI art goes up for auction

Training algorithms to generate art is, in some ways, the easy part. You feed them data, they look for patterns, and they do their best to replicate what they’ve seen. But like all automatons, AI systems are tireless and produce a never-ending stream of images. The tricky part, says German AI artist Mario Klingemann, is knowing what to do with it all.

“For me, this potential is what makes it both interesting and difficult,” Klingemann tells The Verge. “It feels almost wrong to just pick a single thing [that the program produces]. Because, yes, it can create a lot of images, but it’s more magical to see it at work.”

Seeing this process is exactly what Klingemann has achieved with Memories of Passersby I, his video installation that’s due to be auctioned at Sotheby’s this week. This marks the second piece of AI art to be sold at a major auction house. Memories consists of two screens, each using AI to generate a portrait every few seconds. Every image is unique and morphs seamlessly into its successor. It’s like watching a lava lamp made of human faces.


Image: Sotheby’s
Faces from Klingemann’s Memories of Passersby I.
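The article doesn't say how Memories of Passersby I produces its seamless morphing, but the standard technique in GAN-based video work is to interpolate between latent vectors and feed each intermediate vector to the generator. A minimal sketch of that idea, assuming a 512-dimensional Gaussian latent space (the generator itself is omitted; the vectors here are illustrative):

```python
import numpy as np

def slerp(v0, v1, t):
    """Spherical interpolation between two latent vectors.

    Straight-line interpolation tends to pass through low-probability
    regions of a Gaussian latent space, so GAN animations usually
    interpolate along the sphere instead.
    """
    v0_n = v0 / np.linalg.norm(v0)
    v1_n = v1 / np.linalg.norm(v1)
    omega = np.arccos(np.clip(np.dot(v0_n, v1_n), -1.0, 1.0))
    if np.isclose(omega, 0.0):
        return v0  # vectors are (nearly) parallel
    return (np.sin((1 - t) * omega) * v0 + np.sin(t * omega) * v1) / np.sin(omega)

rng = np.random.default_rng(0)
z_a = rng.standard_normal(512)  # latent code for the current face
z_b = rng.standard_normal(512)  # latent code for the next face

# Feeding each intermediate vector to the generator would yield one
# frame of the morph; here we just produce the vectors themselves.
frames = [slerp(z_a, z_b, t) for t in np.linspace(0.0, 1.0, 30)]
```

Because each endpoint is a distinct random vector, each morph is unique: the installation never needs to revisit the same path through latent space.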

Auctions like this show that after years of fermentation, AI art is moving into the world of high art. But as it does, it invites questions about the nature of art and creativity. What is the relationship between artist and machine? Can AI programs ever really be called creative?

As Klingemann explains, each portrait in Memories of Passersby I is created by a type of AI program known as a generative adversarial network (GAN). These are two-part networks that are trained on huge datasets. The first part of the network (the “generator”) tries to replicate this data, while the second part (the “discriminator”) attempts to distinguish between this output and the real thing.

Images are bounced back and forth between the two modules until the discriminator can no longer tell the difference between the fake data and the original training material. In the case of Memories, this training data was a huge collection of portraits from the 17th, 18th, and 19th centuries, all selected by Klingemann. He also tunes the network's standards, adjusting how exacting the discriminator is and which qualities it rewards, thereby steering the images the generator produces.
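The adversarial loop described above can be sketched in miniature. Real systems like Klingemann's train deep networks on images; the deliberately tiny toy below uses one-dimensional data, a linear generator, a logistic discriminator, and hand-written gradients, purely to show the alternating generator/discriminator updates. The data distribution, learning rate, and step count are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

# "Real" data the generator must learn to imitate: scalars near 4.
def real_batch(n):
    return rng.normal(4.0, 0.5, n)

# Generator: G(z) = wg*z + bg, z ~ N(0,1).
# Discriminator: D(x) = sigmoid(wd*x + bd).
wg, bg = 1.0, 0.0
wd, bd = 0.0, 0.0
lr, n = 0.05, 64

for _ in range(2000):
    # --- discriminator update: score real data high, fakes low ---
    x_real = real_batch(n)
    z = rng.standard_normal(n)
    x_fake = wg * z + bg
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    # gradients of -log D(real) - log(1 - D(fake)) w.r.t. wd, bd
    gwd = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    gbd = np.mean(-(1 - d_real) + d_fake)
    wd -= lr * gwd
    bd -= lr * gbd

    # --- generator update: fool the (frozen) discriminator ---
    z = rng.standard_normal(n)
    x_fake = wg * z + bg
    d_fake = sigmoid(wd * x_fake + bd)
    # gradients of -log D(fake) w.r.t. wg, bg (non-saturating loss)
    gwg = np.mean(-(1 - d_fake) * wd * z)
    gbg = np.mean(-(1 - d_fake) * wd)
    wg -= lr * gwg
    bg -= lr * gbg

# After training, the generator's samples should cluster near the
# real data, well away from its starting point around zero.
fakes = wg * rng.standard_normal(1000) + bg
```

The same back-and-forth structure scales up to image GANs, where the generator and discriminator are convolutional networks and the gradients come from automatic differentiation rather than being written out by hand.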


Image by Mario Klingemann
Sample faces generated by Klingemann’s GAN.

The images created by GANs have become the defining look of contemporary AI art. The pictures they produce are characterized by diffuse boundaries between different objects. When trained to produce portraits, for example, GANs turn humans into malleable pink dough. In the most extreme examples, ears twist like conches, teeth and eyes multiply, and hair melts into nothingness. The resulting portraits are often compared to those of British painter Francis Bacon, who was known for his grotesque and unsettling imagery.

The ability of GANs to churn out endless images is something else entirely, and it has triggered a variety of responses from artists. Some, like Klingemann, have co-opted this capacity for production, making it a central part of their work. Others take a curatorial approach, selecting individual images from the flow.

The first major auction of an AI artwork, for example, was a GAN-generated portrait of a fictional aristocrat, Edmond de Belamy, which was selected by human curators, printed out, and stuck in a gold frame to mimic the settings of its training data. The portrait was auctioned at Christie’s last October, and it fetched an unprecedented sum of $432,500, more than 40 times the estimate. The work was controversial — not among the gatekeepers of the art world, but among AI artists themselves. Many noted that the portrait lacked originality. Its creators, a French collective named Obvious, borrowed much of the code used to create the picture and ran a successful press campaign for the auction, hyping it with mottos like “creativity is not only for humans.” Some scoffed that Obvious had simply printed out the artwork, suggesting this was a crude way to interpret the output of a GAN.

Chris Peters, a former software engineer and AI artist, says this is a “horrible” way to approach the medium. “Where’s the humanity?” he asks. But, like Obvious, Peters believes in curating images from GANs.

In Peters’ own work, he selects pictures from a GAN trained on 19th-century landscapes and then paints them himself. Speaking to The Verge by email, he says this is the best way to honor the original artists who created the work used to train the AI, and it gives him the time he needs to better appreciate the images.


Image by Chris Peters
One of Peters’ GAN-generated landscapes (left) next to the painted reproduction (right).

“I learned in art school it can take hours and hours of careful observation before your mind quiets down to the point you can really see and understand something,” Peters says. “I wanted to get inside the AI’s head, to achieve some understanding of what it was trying to do. I was able to, but only after days and days of looking at them while painting them.”

He adds: “If I just printed out the image, I would not understand 1/100th of what is there compared to standing for hours and hours and days and days painting.”

Other artists have combined the two approaches: the speed of the network and the patience of the painter. One artist, Robbie Barrat, recently collaborated with a painter named Ronan Barrot (the similarity in their names is pure coincidence), training a GAN on the latter’s many paintings of skulls. The GAN outputs were displayed alongside the original paintings, but Barrat also found a way to take advantage of AI’s infinite output. He created a peepshow box, which only one person can look into at a time. The viewer presses a button, and the box generates a new image of a skull.

“It will display it for like five seconds and then it will add an input vector to the ‘do not use list,’” Barrat told art news site Artnome. “So basically you are going to be the only person to ever see that skull… ever.”
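Barrat hasn't published the code behind the box, but the "do not use list" he describes amounts to a simple bookkeeping scheme: draw a fresh latent vector for each viewer, record it, and never serve it again. A hypothetical sketch of that mechanic (class and method names are my own invention):

```python
import numpy as np

class OneShotSampler:
    """Serve each latent vector exactly once, mimicking the
    'do not use list' Barrat describes: once a skull has been
    shown, its input vector is retired forever."""

    def __init__(self, dim=512, seed=None):
        self.rng = np.random.default_rng(seed)
        self.dim = dim
        self.retired = []  # the "do not use list"

    def next_vector(self):
        z = self.rng.standard_normal(self.dim)
        # A fresh Gaussian draw collides with a previous one with
        # effectively zero probability, so recording it is enough.
        self.retired.append(z)
        return z  # would be fed to the GAN's generator for one viewing

sampler = OneShotSampler(seed=7)
first = sampler.next_vector()
second = sampler.next_vector()
```

In a continuous latent space no two random draws are ever identical in practice, so the retired list is less a collision guard than a record of which skulls have already had their one and only viewing.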


The same is true of Klingemann’s Memories of Passersby I — if you were alone in a room with it, at least. While looking at those screens, you would likely see an image that would never exist again. It’s a casual brush with infinity, like the fact that every time you shuffle a deck of cards, you’re creating a configuration that’s probably never existed in history before.

When it comes to AI, the feeling of infinite productivity resonates strongly with the technology’s cultural history. World myths are full of machines that reveal the hubris of humans by simply working without end: the unstoppable brooms of Disney’s Fantasia (borrowed from the 18th-century German poem “The Sorcerer’s Apprentice”) and the golem of Jewish folklore. Even the contemporary field of AI safety research has its own fable, the paperclip maximizer: a clever but dumb AI that’s told to make paperclips and ends up consuming all of the world’s resources to do so.

For Klingemann, finding a way to incorporate this aspect of GANs into his artwork is at least something he’s happy to keep exploring. “I’m not saying I won’t pursue other paths later, but this is closer to showing the potential that these machines have,” he says. To him, the endless parade of portraits in Memories of Passersby I better captures the feeling of unceasing production from AI — “the almost overwhelming output [that] will never stop.”


Image: Sotheby’s
Time-lapse footage of Memories of Passersby I.