
Alexander Mordvintsev, “deepdream.c,” 2015/2021

The Hello, World! of AI art

Kanon
10 min read · May 18, 2021


Alexander Mordvintsev, “deepdream.c,” 2015/2021

View deepdream.c in the K21 gallery here

Flatland tells of a three-dimensional sphere passing down and through an infinitesimally thin world. Flatlanders see it as a bulging circle, appearing suddenly as a dot and then vanishing into nothingness, but how does the sphere see the geometric critters on the plane? If higher-dimensional objects look simpler from lower-dimensional perspectives, what do simple creatures look like from above and beyond? What might the gods see when they look at us? Might they see more than we see of ourselves?

Alexander Mordvintsev, then a newly installed engineer at Google, was awakened by a startling dream in the middle of the night on 18 May 2015. Unable to sleep, he felt compelled to carry on with a software program he had been tinkering with for months. That night, when he finally cracked it, he shared the results on a company social network and went back to bed. The 60 or so comments he woke up to were just the beginning of what would become a viral phenomenon. Fellow Googlers quickly realized what he had done; soon the rest of the world would follow.

Mordvintsev’s original 2am post on the internal Google+ social network on May 18, 2015. Courtesy of Alexander Mordvintsev.

In just 30 lines of code, Mordvintsev reverse engineered a convolutional neural net, a machine learning algorithm designed to perform image recognition. If this AI had a mind, Mordvintsev had learned to read it by turning it inside out. His patch converted an image recognition system into an image generation system, one that could be used by artists.
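
The trick at its core is gradient ascent on the input: freeze the trained network, then repeatedly nudge the image's pixels in whatever direction increases a chosen layer's activation, so the image drifts toward the patterns the network has learned to recognize. A minimal, hypothetical C sketch of that principle, with a single fixed-weight neuron standing in for an entire convolutional layer (an illustration only, not Mordvintsev's code):

/* Toy DeepDream principle: hold the "trained" weights fixed and run
   gradient ascent on the input, so the input drifts toward the pattern
   the neuron responds to most strongly. An L2 penalty keeps it bounded. */
#include <stdio.h>

#define N 8

static const float w[N] = { 1, -1, 2, 0, -2, 1, 0, 1 }; /* frozen weights */
static float img[N];                                     /* the "image" being dreamed */

int main(void)
{
        float lr = 0.1f;
        float act = 0.0f;
        int step, i;

        for (step = 0; step < 200; ++step)
                for (i = 0; i < N; ++i)
                        /* d/d img[i] of (w.img - 0.5*|img|^2) is w[i] - img[i] */
                        img[i] += lr * (w[i] - img[i]);

        for (i = 0; i < N; ++i)
                act += w[i] * img[i];
        printf("activation after dreaming: %f\n", act);
        return 0;
}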

Mordvintsev’s 2015 take on Van Gogh’s Starry Night. Courtesy of the artist.

In 2015, these webs of machine intelligence were just being woven into the corners of digital infrastructure and our everyday lives. Mordvintsev’s experiment animated and visualized them. Against a steady drumbeat of Skynet scares and singularity mongering, Mordvintsev revealed the perspective of a new Big Other. He didn’t so much open Roko’s eyes as give us a means of peering through them.

What took months to complete was not writing the code itself but tweaking its parameters. Like a biologist fiddling with the knobs of a novel microscope, Mordvintsev slowly brought something meaningful into focus, both engineering and discovering a machine aesthetic. This new kind of beauty was not simply a human projection but a co-creation of carbon- and silicon-based intelligence.

Later dubbed “DeepDream,” the discovery spurred by his own dream quickly went viral. First Google colleagues, then artist-engineer hybrids around the world began churning out a flurry of images in concert with neural nets. Rock bands featured them on rejiggered album covers and in music videos. Apps were written that let anyone initiate their own computational hallucination. The Internet was soon awash in what Steven Levy called a “Boschian bestiary of visions.”

“DeepDream: The art of neural networks,” a 2016 exhibition at Gray Area, San Francisco, in collaboration with the Google Artists + Machine Intelligence group, featuring work created using Mordvintsev’s discovery. Courtesy of Google AMI.

Six years later, on the anniversary of this invention, Mordvintsev has minted his first DeepDream NFT: a remastered version of the 2015 archetype.

The original implementation stitched together a number of then-budding libraries and services. Reanimating them would require unwinding years of updates. Instead, Mordvintsev rewrote the entire process from scratch, digesting the external dependencies into a new monolithic program that comes with the NFT: deepdream.c.

The comments throughout deepdream.c annotate the thoughts that coursed through Mordvintsev’s mind while he detailed and archived the process of seeing into the machine’s mind.

Mordvintsev has referred to it as “steampunk DeepDream.” To make a future-proof version, he had to reach back in time. Stripping his software of the luxuries of modern programming, he coded it according to a standard contemporaneous with the Soviet Union he was born into. Meticulously and mercilessly refined with an esoteric style meant to endure, his code is a canvas. The original 30 lines have ballooned to over 1,000: an efficient, minimalist, single-threaded algorithm written for future beginners.

If his initial experiment was the inception of the now thriving scene of AI-generated art, deepdream.c is its retroactively created “Hello, World!” program: the post-facto entry point engineered to last 100 years. We discussed this ambition with Mordvintsev in the interview below.

How did you end up at Google?

I got an email out of the blue in 2014 from a Google recruiter who found my name in a list of Russian high school students who had won a national contest about ten years earlier. I found it a weird strategy, but took the job as a software engineer. I already had some former classmates working in Zurich and wanted to stay close to family, so I chose the Swiss office. It was a product-focused outpost, but I was able to land in a small team doing computer vision. Prior to Google, I worked in St. Petersburg as a computer vision expert without any training in computer vision, for a company that didn’t really do computer vision. That meant I ended up learning on the job, rapidly making a lot of prototypes.

Were you already an expert in AI?

The first time I encountered neural nets was in the early 2000s as a high school student. While studying computer science in university, I was told they were conceptually interesting but didn’t work. At Google, on the Safe Search team, I was working with neural nets that did. I thought I knew something about computer vision and what computers could and couldn’t do. Those things surpassed my expectations. I didn’t expect them to be that good. I spent the first few months just catching up.

What was the genesis of DeepDream?

At Google, we are encouraged to use twenty percent of our time–one day a week–to work on a personally motivated project. I spent that time trying to reverse engineer image-trained neural networks. I wanted to visualize what was happening inside the black box. By early 2015, I had some nice visual results, but it wasn’t until late in the night on May 18 that I finally got around to running the main experiment I had been planning. I shared the results on the internal Google+ social network and soon after it went viral. I woke up the next morning to a hundred or so comments, one quoting Lovecraft and saying I had woken up the dark forces. My manager said it was going to make a nice Scientific American cover. It clearly triggered the artistic side of my colleagues.

2015 image of a custard apple, seen through Mordvintsev’s process. Courtesy of the artist.

When did you begin making art?

I was always fascinated by generative art and the demoscene. Typical nerd things, like the Game of Life or Escheresque illusions. The first time I made my own was around 2009. A couple of days before the submission deadline for a conference I was going to attend in Helsinki, I decided to hack something together myself. It was just a single-effect hydrodynamic simulation combined algorithmically with a Van Gogh painting, set to Creative Commons classical music. It wasn’t artistically super interesting or technically challenging, but the jury selected it to play on the big screen for five thousand people because it was so different from what everyone else was doing, because it was irrelevant to the current trend. This was my first artwork, but I didn’t make much art or think of myself as an artist until after DeepDream. It wasn’t until a couple of years later, in 2017, at my wife’s suggestion, that I finally added “artist” to my research scientist title.

What’s the difference between art and research science?

I think it was Picasso who said, “I never do works of art, it’s all research.” I think that’s too simplistic. Research scientists have to cite their sources and explain their tricks. They have to take the time to show that the thing they’re doing is sufficiently similar to what others have already done so that it continues the line of existing work. There are many examples of scientific ideas so ahead of their time that they took decades to be acknowledged, so establishing continuity is critical to acceptance. With my art, I haven’t had to prove anything or explain everything or cite my sources.

2018 example of the AI art that Mordvintsev calls “origami.” More can be found here.

What was the motivation for deepdream.c?

You asked me to reproduce the original DeepDream image for the K21 Collection. I tried to run my old code on an old machine, but the various updates over the last six years had broken the dependencies. It was very easy for me to recreate the effect on a modern software stack, but that didn’t feel right. K21 has an ambition of creating “art for the next 100 years.” If it’s already non-trivial to go back six years, there’s no way the original DeepDream code will be accessible in 100 years. So I set about creating code that would have a chance of being executable far in the future.

Why did you choose the C programming language?

The computers we already know, from transistor-based machines to our brains to molecular interactions, are still just a small fraction of the computers possible in our universe. I’ve often wondered, what would an alien computer look like? What if their circuitry was very slow but could be organized into massive parallel processing machines? What if it was very difficult to synchronize signals across such vast arrays? If it were feasible to design a DeepDream schematic for low level hardware and create a custom deepdream CPU–DeepDream on a chip–I would, but I wanted something that is simple and easy to reproduce for others here on Earth. So instead I asked, what are the technologies that are in widespread and active use today that were developed decades ago? The answer is C.

How did you design it?

A code base I expect to survive deep into the future is Linux. Linus Torvalds, its creator, used the C89 standard of the C language. I have written a lot in the more modern C++ language–it has issues, but there are some very smart computer vision use cases that I appreciate–so I didn’t understand why Torvalds was so stubborn in not allowing it, or even the more recent C99, in the kernel–the heart–of Linux. He also specified some more esoteric rules, like using eight spaces for tabulation or only allowing three levels of nesting, that all sounded quite strange to me. Yet he wrote both the Linux kernel and the Git version control system single-handedly, so perhaps he’s someone worth listening to. So I did.
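
For a sense of what those constraints look like on the page, here is a small hypothetical fragment (not taken from deepdream.c) written in that spirit: only /* */ comments, declarations at the top of each block, eight-space indentation, and shallow nesting:

/* C89, kernel-flavored style: no //-comments, variables declared before
   any statements, eight-space tabs, nesting kept shallow. */
#include <stdio.h>

static float clampf(float x, float lo, float hi)
{
        if (x < lo)
                return lo;
        if (x > hi)
                return hi;
        return x;
}

int main(void)
{
        float v; /* declared up front, as C89 requires */
        int i;

        for (i = -3; i <= 3; ++i) {
                v = clampf((float)i, -1.0f, 1.0f);
                printf("%d -> %.1f\n", i, v);
        }
        return 0;
}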

You’ve called this the “steampunk DeepDream.” Why?

I hate when software is flawed. I hate unnecessary complexity, unnecessary dependencies. I hate when software gets slower at least as quickly as computers get faster. There is an online course called “From NAND to Tetris,” which teaches students to build a game of Tetris solely out of NAND gates. A logic gate is one of the basic building blocks of all computers. There are a number of them, but they can all be created by composing exclusively NAND gates. The course begins by composing NAND gates into adders and subtractors, then logic units, then feedback loops that form registers, memory, and so on until you build a CPU that can be programmed to run Tetris. I find it quite beautiful in its wholeness and that a single person can understand it all. I built deepdream.c with the same ambition in mind.
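
Every other gate really can be expressed with NAND alone, which is what makes that bottom-up construction possible. A small illustrative C sketch of the idea, unrelated to the course’s own materials:

/* Building the familiar gates out of nothing but NAND, then printing
   their truth table. */
#include <stdio.h>

static int g_nand(int a, int b) { return !(a && b); }
static int g_not(int a)         { return g_nand(a, a); }
static int g_and(int a, int b)  { return g_not(g_nand(a, b)); }
static int g_or(int a, int b)   { return g_nand(g_not(a), g_not(b)); }
static int g_xor(int a, int b)  { return g_nand(g_nand(a, g_nand(a, b)), g_nand(b, g_nand(a, b))); }

int main(void)
{
        int a, b;

        printf("a b | AND OR XOR\n");
        for (a = 0; a <= 1; ++a)
                for (b = 0; b <= 1; ++b)
                        printf("%d %d |  %d   %d   %d\n",
                               a, b, g_and(a, b), g_or(a, b), g_xor(a, b));
        return 0;
}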

Hello, World! is another reference you cite.

This is the most basic program there is. It’s designed for beginners, so it has to be simple and clean. You run it to make sure your system is properly installed. All it does is print out “Hello, World!” I wanted deepdream.c to be this basic. I only used static variables and function pointers, fixed memory allocation, a limited image buffer–all very inconvenient restrictions and the opposite of modern programming styles because I wanted the code to be understandable and direct. I noticed that for some compilers–the programs that convert your code into the 1s and 0s the machine reads–I had to include some additional flags to force it to include the standard math library. This is available on every computer with any compiler, but I wanted it to compile as easily as Hello, World!, so I even incorporated the necessary math functions into the code itself, while ensuring it would compile in less than a second. To run it, just type gcc deepdream.c. Nothing else.
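
A hypothetical fragment in the same spirit (not an excerpt from deepdream.c) gives the flavor of those restrictions: a single statically allocated buffer instead of malloc, and a hand-rolled series in place of a libm call, so a bare gcc invocation with no extra flags is enough:

/* Spirit of the style: fixed-size static storage, no dynamic allocation,
   and a home-made exponential so the standard math library is not needed. */
#include <stdio.h>

#define W 64
#define H 64

static float image[H][W]; /* one fixed image buffer, allocated once and for all */

static float my_exp(float x) /* crude truncated-series stand-in for expf() */
{
        float term = 1.0f;
        float sum = 1.0f;
        int i;

        for (i = 1; i < 16; ++i) {
                term *= x / (float)i;
                sum += term;
        }
        return sum;
}

int main(void)
{
        image[0][0] = my_exp(1.0f);
        printf("e is roughly %f\n", image[0][0]);
        return 0;
}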

Is the minimalism of deepdream.c just an aesthetic choice?

If I were trapped with just a C compiler, it would take me a few weeks to hack the tools I need with my own hands to begin building complex applications. Tools make you productive up to a certain point, but then they start limiting you. You start designing things that are easy to do with those tools and you discard ideas that are useful and interesting only because they are difficult to implement with the tools at hand. That’s why I wanted to use fewer tools, to work from first principles. I don’t want deepdream.c to limit creativity.

Now that you’ve remastered the code with as much care as the DeepDream image itself, what do you consider to be the artwork?

Collectors have reached out to me asking for early prints of DeepDream images, but those were just made on an office printer and they are all gone. To me the artwork was actually this tiny piece of code that I uploaded to GitHub years ago. The code has always been the artwork. Code is my paintbrush.
