Geometric Deep Learning
Cycle GANS

Gudmundur Einarsson
Technical University of Denmark

October 17th 2018

Cycle GANs solve image-to-image translation!

They solve it impressively well

Some of this has been done before (2001)

What is different for image to image translation?

  • More general, not just style transfer

  • Creates and estimates a correspondence between two high-dimensional distributions

  • But why is it so big and popular?

  • We do not need matching samples!

  • Opens up possibilities for image synthesis

Non-paired samples

Why not use GANs?

  • We could use a GAN to generate an image in the other domain and have a discriminator tell whether it was generated or not.

  • In theory this could work

  • In practice it doesn’t

  • GANs alone do not guarantee that inputs and outputs pair up in a meaningful way; there are infinitely many mappings that induce the same output distribution.

  • Very prone to mode collapse, where all inputs are mapped to the same output.

Cyclic Consistency

GAN loss
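
As a reminder, the adversarial loss for the mapping \(G : X \to Y\) with discriminator \(D_Y\) is

\[
\mathcal{L}_{\text{GAN}}(G, D_Y, X, Y) = \mathbb{E}_{y \sim p_{\text{data}}(y)}\left[\log D_Y(y)\right] + \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\log\left(1 - D_Y(G(x))\right)\right]
\]

(and symmetrically for \(F : Y \to X\) with \(D_X\)). In practice the paper replaces the log-likelihood with a least-squares objective for more stable training.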

Cyclic loss, forward and backward
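
The forward cycle \(F(G(x)) \approx x\) and backward cycle \(G(F(y)) \approx y\) are enforced with an \(L_1\) cycle-consistency loss:

\[
\mathcal{L}_{\text{cyc}}(G, F) = \mathbb{E}_{x \sim p_{\text{data}}(x)}\left[\|F(G(x)) - x\|_1\right] + \mathbb{E}_{y \sim p_{\text{data}}(y)}\left[\|G(F(y)) - y\|_1\right]
\]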

How consistent is this?

Full loss
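
Putting the pieces together, the full objective is

\[
\mathcal{L}(G, F, D_X, D_Y) = \mathcal{L}_{\text{GAN}}(G, D_Y, X, Y) + \mathcal{L}_{\text{GAN}}(F, D_X, Y, X) + \lambda\, \mathcal{L}_{\text{cyc}}(G, F)
\]

with \(\lambda = 10\) in the paper.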

Paper that inspired Generator Architecture

Generator details

  • Good for neural style transfer and super resolution

  • Two stride-2 convolutions

  • Several residual blocks

  • Two fractionally strided convolutions, with stride \(0.5\)

  • 6 blocks for \(128\times 128\), 9 for higher res

  • Use instance normalization
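
The bullets above can be sketched as a minimal PyTorch module. This is my own simplification of the generator described in the paper's appendix (initial 7x7 conv, two stride-2 downsampling convs, residual blocks, two transposed convs acting as the stride-1/2 upsampling, final 7x7 conv); exact padding choices and layer counts here are assumptions, not the authors' reference code.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    # Residual block: two 3x3 convs with instance norm and a skip connection.
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1), nn.Conv2d(ch, ch, 3),
            nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class Generator(nn.Module):
    # 7x7 stem, two stride-2 convs, n residual blocks,
    # two fractionally strided (transposed) convs, 7x7 output conv.
    def __init__(self, n_blocks=6):
        super().__init__()
        layers = [nn.ReflectionPad2d(3), nn.Conv2d(3, 64, 7),
                  nn.InstanceNorm2d(64), nn.ReLU(inplace=True)]
        ch = 64
        for _ in range(2):  # downsampling: 128 -> 64 -> 32 spatially
            layers += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1),
                       nn.InstanceNorm2d(ch * 2), nn.ReLU(inplace=True)]
            ch *= 2
        layers += [ResBlock(ch) for _ in range(n_blocks)]
        for _ in range(2):  # upsampling with stride-1/2 ("fractionally strided")
            layers += [nn.ConvTranspose2d(ch, ch // 2, 3, stride=2,
                                          padding=1, output_padding=1),
                       nn.InstanceNorm2d(ch // 2), nn.ReLU(inplace=True)]
            ch //= 2
        layers += [nn.ReflectionPad2d(3), nn.Conv2d(ch, 3, 7), nn.Tanh()]
        self.model = nn.Sequential(*layers)

    def forward(self, x):
        return self.model(x)

G = Generator(n_blocks=6)  # 6 blocks for 128x128 input, per the slide
y = G(torch.randn(1, 3, 128, 128))
print(y.shape)  # torch.Size([1, 3, 128, 128]): output matches input size
```

Note that the two stride-2 downsamples are exactly undone by the two transposed convs, so the generator is shape-preserving.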

Discriminator details

  • Use \(70 \times 70\) PatchGANs

  • Classifies overlapping patches of generated and real images

  • Scales to larger outputs automatically

  • Has fewer parameters than a full image discriminator
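
The \(70 \times 70\) figure is not a literal patch-cropping step; it is the receptive field of one output unit of a small fully convolutional discriminator. A pure-Python sketch, assuming the usual PatchGAN layer configuration from pix2pix/CycleGAN implementations (five 4x4 convs with strides 2, 2, 2, 1, 1):

```python
def receptive_field(layers):
    """layers: list of (kernel, stride) pairs, first layer first.
    Returns the receptive field of one output unit, in input pixels."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1) input steps
        jump *= s             # stride compounds the spacing between units
    return rf

# Assumed 70x70 PatchGAN: C64-C128-C256-C512 plus a 1-channel output conv.
patchgan = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
print(receptive_field(patchgan))  # → 70
```

Because the discriminator is fully convolutional, running it on a larger image simply yields a larger grid of patch verdicts, which is why it scales to larger outputs for free.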

Mechanical Turk Results

Failure Cases

Main author has another great paper!

  • Image-to-image translation with conditional adversarial networks

  • CVPR 2017

  • More cited than CycleGAN, with a somewhat different idea

condGAN

Other cool recent stuff by these guys

Awesome new thing