
DC-GAN

A very simple and plain DC GAN to generate Image Flower Pictures out of the dataset.

Dataset

Only a sample of about one hundred images is provided here, which is not enough to train the GAN. Kindly download the full flower dataset from http://chaladze.com/l5/ (the Linnaeus 5 dataset project) and place it in the ./flowers directory. We also demonstrate the same model on the Fashion-MNIST dataset.
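Since the generator used here ends in a tanh activation, the training images should be rescaled from [0, 255] to [-1, 1] before training. A minimal preprocessing sketch (the helper name is hypothetical, not part of this repo):

```python
import numpy as np

def scale_to_tanh_range(images):
    """Map uint8 pixels in [0, 255] to float32 values in [-1, 1],
    matching a generator whose output activation is tanh."""
    return images.astype("float32") / 127.5 - 1.0

# Dummy batch shaped like Fashion-MNIST samples (28 x 28 x 1):
batch = np.random.randint(0, 256, size=(4, 28, 28, 1), dtype=np.uint8)
scaled = scale_to_tanh_range(batch)
print(scaled.min() >= -1.0 and scaled.max() <= 1.0)  # True
```

The same scaling applies to the 64 × 64 × 3 flower images; only the array shape changes.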

Abstract

DC-GAN, or Deep Convolutional GAN, is a variant of the GAN that uses deep stacks of convolutional layers: ordinary convolutional layers in the discriminator and transpose-convolutional layers (essentially the reverse of classical convolutional layers [1]) in the generator. The main advantage DC-GANs offer over classical GANs is better resistance to mode collapse during image training [2]. We use the tanh, sigmoid, relu, and selu activation functions provided by Keras in TensorFlow 2. Please note: the Conv2DTranspose() layer in Keras differs from the UpSampling2D layer in that the Conv2DTranspose kernel weights are learnable during training, which is exactly why we use Conv2DTranspose() here. For the training steps, kindly refer to the 1D-GAN/Classic GAN repo for more details. We implement similar training steps here, except that we train on batches of images with Keras's model.train_on_batch(), an alternative and much simpler way to implement GANs than manually updating weights with tf.GradientTape() as in the official Keras documentation.
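As a concrete illustration, the architecture and batch-wise training described above can be sketched in Keras roughly as follows. The layer sizes, latent dimension, and helper names are assumptions of this sketch, not the exact models in this repo:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 64  # noise-vector size; a choice made for this sketch

def build_generator():
    """Upsample a noise vector into a 28x28x1 image. Conv2DTranspose kernels
    are learnable during training, unlike a fixed UpSampling2D layer."""
    return tf.keras.Sequential([
        layers.Input(shape=(LATENT_DIM,)),
        layers.Dense(7 * 7 * 64, activation="selu"),
        layers.Reshape((7, 7, 64)),
        layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="selu"),  # -> 14x14
        layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="tanh"),   # -> 28x28
    ])

def build_discriminator():
    """Downsample an image to a single real/fake probability."""
    return tf.keras.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),
        layers.Conv2D(64, 3, strides=2, padding="same", activation="relu"),
        layers.Flatten(),
        layers.Dense(1, activation="sigmoid"),
    ])

generator = build_generator()
discriminator = build_discriminator()
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# Stack the two models, freezing the discriminator inside the stack so that
# gan.train_on_batch() updates only the generator's weights.
discriminator.trainable = False
gan = tf.keras.Sequential([generator, discriminator])
gan.compile(optimizer="adam", loss="binary_crossentropy")

def train_step(real_images):
    """One batch-wise training step using train_on_batch()."""
    batch = real_images.shape[0]
    noise = np.random.normal(size=(batch, LATENT_DIM)).astype("float32")
    fake_images = generator.predict(noise, verbose=0)
    # 1) Discriminator: real images labelled 1, generated images labelled 0.
    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch, 1)))
    # 2) Generator: label fakes as real and backpropagate through the stack.
    g_loss = gan.train_on_batch(noise, np.ones((batch, 1)))
    return d_loss_real, d_loss_fake, g_loss
```

The trainable flag is captured when each model is compiled, so the discriminator still learns through its own compiled model while staying frozen inside the stack; this is the usual train_on_batch() pattern that avoids a manual tf.GradientTape() loop.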

Results

Results on the Fashion-MNIST dataset after a 15-minute run on a Ryzen 5 CPU. Channel size = 1 (grayscale); 60,000 samples in total, each 28 × 28 × 1.

(Video: DC.GAN.mp4)

Results on the Linnaeus 5 flower dataset; training time = 1 hour 17 minutes on a Google Colab GPU. 1,600 samples in total, each 64 × 64 × 3.

(Sample generated flower images: GAN_1 – GAN_5 and further snapshots.)

As the initial results show, the model began producing recognizable flower shapes early in training: shades and boundaries were visible almost from the start, and the colors gradually converged toward smoother gradients, yielding blurred but identifiable flower visuals.

For better results, train for 10–15 hours on a Google Colab GPU to reach good convergence.

Bibliography

  1. Gao, Hongyang, et al. "Pixel Transposed Convolutional Networks." IEEE Transactions on Pattern Analysis and Machine Intelligence 42.5 (2019): 1218–1227.
  2. Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks." arXiv preprint arXiv:1511.06434 (2015).
  3. Goodfellow, Ian, et al. "Generative Adversarial Nets." Advances in Neural Information Processing Systems, 2014, pp. 2672–2680.
  4. LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep Learning." Nature 521.7553 (2015): 436–444.
  5. Thanh-Tung, Hoang, and Truyen Tran. "Catastrophic Forgetting and Mode Collapse in GANs." 2020 International Joint Conference on Neural Networks (IJCNN), IEEE, 2020.
  6. Li, Chongxuan, et al. "Triple Generative Adversarial Nets." arXiv preprint arXiv:1703.02291 (2017).
