
Image Synthesis Using Variational Autoencoders with Adversarial Learning

Reproduction & comparison study

In this project, we analyze the adversarial generator-encoder (AGE) and the introspective variational autoencoder (IntroVAE).

Below are the structure and training flow of (1) VAE, (2) AGE, and (3) IntroVAE:

[Figure: structure and training flow of VAE, AGE, and IntroVAE]
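For reference, the plain VAE objective that both AGE and IntroVAE build on is a reconstruction term plus the KL divergence between the approximate posterior and the prior. A minimal PyTorch sketch is below; the function names and the choice of an MSE reconstruction term are illustrative assumptions, not the repository's code.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar, beta=1.0):
    """Standard VAE objective: reconstruction term plus KL divergence between
    the approximate posterior N(mu, sigma^2) and the prior N(0, I).
    AGE and IntroVAE keep this autoencoding backbone but replace or augment
    the KL term with adversarial criteria on the latent codes."""
    recon = F.mse_loss(x_recon, x, reduction="sum")                # pixel-wise reconstruction
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # analytic KL to N(0, I)
    return recon + beta * kl

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps with eps ~ N(0, I) so gradients flow to the encoder."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)
```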

Below is the detailed network architecture for encoding 128×128 images in IntroVAE (top) and AGE (bottom).

[Figure: encoder architectures for 128×128 images in IntroVAE (top) and AGE (bottom)]
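The exact layer configurations are those shown in the figure. Purely to illustrate the downsampling pattern such a 128×128 encoder follows, here is a hypothetical strided-convolution encoder in PyTorch; the channel widths and latent dimension are placeholders, not the values used in AGE or IntroVAE.

```python
import torch
import torch.nn as nn

class ConvEncoder128(nn.Module):
    """Illustrative encoder for 128x128 RGB inputs: six stride-2 convolutions
    halve the spatial size down to 2x2, then a linear layer produces the latent
    mean and log-variance. Channel widths and latent dimension are placeholders,
    not the configurations used in AGE or IntroVAE."""
    def __init__(self, latent_dim=256):
        super().__init__()
        chans = [3, 32, 64, 128, 256, 512, 512]
        layers = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            layers += [nn.Conv2d(c_in, c_out, kernel_size=4, stride=2, padding=1),
                       nn.BatchNorm2d(c_out),
                       nn.LeakyReLU(0.2, inplace=True)]
        self.features = nn.Sequential(*layers)             # 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2
        self.fc = nn.Linear(512 * 2 * 2, 2 * latent_dim)   # outputs [mu, logvar]

    def forward(self, x):
        h = self.features(x).flatten(1)
        mu, logvar = self.fc(h).chunk(2, dim=1)
        return mu, logvar
```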

We implement the models from scratch and apply them to the CIFAR-10 and CelebA image datasets for unconditional image generation and reconstruction. On CIFAR-10 we evaluate AGE quantitatively, reaching an Inception score of 2.90. AGE does not converge on the higher-resolution CelebA dataset, whereas IntroVAE trains stably but suffers from blurriness and occasional mode collapse.
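For context, the Inception score is exp(E_x KL(p(y|x) || p(y))), computed from the class posteriors of a pretrained Inception-v3 classifier. The sketch below shows one way to compute it; it assumes torchvision's pretrained inception_v3 and generated images already resized to 299×299 and normalized, and it is not the evaluation script used in this project.

```python
import torch
import torch.nn.functional as F
from torchvision.models import inception_v3

@torch.no_grad()
def inception_score(images, splits=10):
    """images: float tensor (N, 3, 299, 299), already normalized for Inception-v3.
    Returns the mean Inception score over `splits` equal chunks."""
    model = inception_v3(pretrained=True, transform_input=False).eval()
    probs = []
    for batch in images.split(64):                 # class posteriors p(y|x)
        probs.append(F.softmax(model(batch), dim=1))
    probs = torch.cat(probs)

    scores = []
    for chunk in probs.chunk(splits):
        p_y = chunk.mean(dim=0, keepdim=True)      # marginal p(y) within the split
        kl = (chunk * (chunk.log() - p_y.log())).sum(dim=1)
        scores.append(kl.mean().exp())
    return torch.stack(scores).mean().item()
```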

Requirements

Python 3.6

pytorch>=1.1.0

numpy>=1.14.0

matplotlib>=2.1.2

tqdm>=4.31.1

Run

train AGE: AGE/age_training.py

train IntroVAE: IntroVAE/introvae_training.py

vae_training.py can be used for hyperparameter search

Results

Here are some examples of our results:

[Figure: generated CIFAR-10 samples]

[Figure: generated CelebA samples]

[Figure: reconstructions]

Useful links:

Original repositories of AGE and IntroVAE

Our project report

About

Project for DD2424 Deep Learning in Data Science
