Generating images in different styles with neural networks
Neural networks have been getting deeper and deeper, leading to a range of techniques for trying to understand how they work. I wrote about some experiments with one of these, Deep Dream; this post covers another, Neural Style, introduced by Leon Gatys, Alexander Ecker, and Matthias Bethge of Bethge Lab.
TL;DR: Neural Style allows us to apply the style of one image to another. Check out some examples below.
By extracting information from the representations a neural network builds of an image, Neural Style can generate an image with the content of one image rendered in the style of another. How does it do this? Neural networks work by successively transforming their input data; by capturing that data as it passes from layer to layer, we can construct a hierarchical representation of the input image. This representation encodes information about the image, including vector representations of its texture and its content. For a more technical explanation of how the Neural Style algorithm works, check out my slides or the original paper.
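To make the content/style split concrete, here's a minimal sketch in PyTorch (the original implementation is in Torch; the helper names and layer indices below are my own illustrative choices, not the paper's code) of how content features and style features, in the form of Gram matrices of channel correlations, can be read out of a pretrained VGG network:

```python
import torch
import torchvision.models as models

# Sketch only: layer indices and helper names are illustrative assumptions.
vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)  # we optimize the image, never the network

def extract_features(image, layers):
    """Collect activations at the given layer indices as `image` flows through VGG."""
    feats = {}
    x = image
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            feats[i] = x
    return feats

def gram_matrix(activations):
    """Style representation: correlations between feature channels (assumes batch size 1)."""
    _, c, h, w = activations.shape
    flat = activations.view(c, h * w)
    return flat @ flat.t() / (c * h * w)
```

The Gram matrix discards spatial layout and keeps only which features co-occur across the image, which is why it captures texture and style rather than content.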
Input image:
Guernica, Picasso:
Generating our picture of Swarthmore in the style of Guernica:
Neural Style starts from random noise and then gradually tweaks the image until it matches the style and content of the input images as closely as it can (by minimizing a loss function that measures the mismatch with each).
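As a rough sketch of that loop, building on the helpers above (here `content_img` and `style_img` stand in for preprocessed image tensors, and the layer choices, loss weights, optimizer, and iteration count are assumptions to tune, not the paper's settings):

```python
content_layers = {21}               # a deeper layer captures content
style_layers = {0, 5, 10, 19, 28}   # several layers together capture style

# Fixed targets from the two input images (detached so they stay constant).
content_targets = {i: f.detach()
                   for i, f in extract_features(content_img, content_layers).items()}
style_targets = {i: gram_matrix(f).detach()
                 for i, f in extract_features(style_img, style_layers).items()}

# Start from random noise and optimize the pixels directly.
image = torch.randn_like(content_img).requires_grad_(True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(500):
    optimizer.zero_grad()
    feats = extract_features(image, content_layers | style_layers)
    content_loss = sum((feats[i] - content_targets[i]).pow(2).mean()
                       for i in content_layers)
    style_loss = sum((gram_matrix(feats[i]) - style_targets[i]).pow(2).mean()
                     for i in style_layers)
    loss = content_loss + 1e3 * style_loss  # the style weight is a knob to tune
    loss.backward()
    optimizer.step()
```

Turning the style weight up or down trades fidelity to the content image against fidelity to the style image.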
Below you can find the same image of Swarthmore rendered in a couple of different styles:
Inspired by these results, I decided to try generating new artworks by combining works of existing artists. After a number of experiments, I settled on Keith Haring, an American artist known for his vivid style. I combined one of his black-and-white works with some of his colorful textural patterns to see what he might have produced. The results are below, with the original content image on the left and the styles applied to it running along the top.
If you find these images interesting, you’ll be glad to know that they’re not too computationally intensive; on Amazon’s g2.2xlarge instances, generating one image takes about 15 minutes.
Other people have also experimented with Neural Style. Below is one of my favorite examples, generated by Gene Kogan, which applies numerous styles to Alice in Wonderland:
If you want to run Neural Style yourself, Justin Johnson has put up an implementation in Torch on GitHub.