
Satellite Images to Map Images using GANs

Trained a Generative Adversarial Network (GAN) that, given a satellite image of a location as input, outputs the map image of that same location. It was trained using standard adversarial training.

Author - Omkar Nitsure

Email - [email protected]
GitHub profile - https://github.com/omkarnitsureiitb

Dataset preparation and preprocessing

The image dataset that I used for training and testing can be found here. Each image in the dataset contains a satellite image and its Google Maps counterpart merged side by side. I first reshaped each image to (256, 512) and then split it into two parts of size (256, 256) each: one is the input to the GAN, and the other is the target output. I used 90% of the total data for training, since GANs are data-hungry and more data significantly helps the model learn the desired output image distribution. I stored the images as NumPy arrays, which allow fast retrieval. You can download x_test and y_test. The code for dataset extraction is here, and the code that converts the images into NumPy arrays and stores them in files can be found here.
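The splitting step above can be sketched as follows. This is an illustrative, NumPy-only version (not the repo's exact preprocessing code); the `split_pair` name and the random stand-in image are assumptions for the example:

```python
import numpy as np

def split_pair(merged):
    """Split a merged (256, 512, 3) satellite|map image into the
    (satellite, map) pair, each of shape (256, 256, 3)."""
    assert merged.shape[:2] == (256, 512)
    satellite = merged[:, :256]   # left half: GAN input
    gmap = merged[:, 256:]        # right half: GAN target
    return satellite, gmap

# Fake merged image standing in for one dataset sample
merged = np.random.randint(0, 256, size=(256, 512, 3), dtype=np.uint8)
sat, gmap = split_pair(merged)
# Arrays like these can then be stacked and saved with np.save for fast retrieval.
```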

Model Architecture

The GAN model consists of two main building blocks, the Generator and the Discriminator, which have exactly opposite objectives: the generator seeks to produce fake images that the discriminator cannot distinguish from real ones, while the discriminator seeks to correctly detect the fakes. This is the essence of adversarial training.
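The opposing objectives can be made concrete with the standard binary cross-entropy losses. This is a minimal NumPy sketch of the loss computation only (the discriminator scores here are made-up numbers, and `bce` is a hypothetical helper, not the repo's training code):

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy between predicted probabilities and targets."""
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

# Hypothetical discriminator scores in [0, 1]
d_real = np.array([0.9, 0.8])   # scores on real map images
d_fake = np.array([0.2, 0.3])   # scores on generated map images

# Discriminator wants real -> 1 and fake -> 0
d_loss = bce(d_real, np.ones(2)) + bce(d_fake, np.zeros(2))
# Generator wants its fakes to be scored as real (-> 1)
g_loss = bce(d_fake, np.ones(2))
```

Each training step alternates between minimizing `d_loss` with respect to the discriminator and `g_loss` with respect to the generator.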

Generator

The generator is made of an encoder-decoder arrangement: the encoder is built from convolutional layers and the decoder from transposed convolutional layers. You can see the source code of the generator, which includes both the encoder and decoder, to get an idea of their architectures and how they are packed together to build the generator block.
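The shape bookkeeping of such an encoder-decoder can be sketched as below. The depth of four blocks and the kernel/stride/padding values (4, 2, 1) are illustrative assumptions, not the repo's exact configuration; the point is that stride-2 convolutions halve the spatial size and stride-2 transposed convolutions double it, so the output image matches the input size:

```python
def conv2d_out(h, w, k=4, s=2, p=1):
    """Output size of a strided convolution (encoder downsampling step)."""
    return (h + 2 * p - k) // s + 1, (w + 2 * p - k) // s + 1

def tconv2d_out(h, w, k=4, s=2, p=1):
    """Output size of a strided transposed convolution (decoder upsampling step)."""
    return (h - 1) * s - 2 * p + k, (w - 1) * s - 2 * p + k

h, w = 256, 256
trace = [(h, w)]
for _ in range(4):              # encoder: 256 -> 128 -> 64 -> 32 -> 16
    h, w = conv2d_out(h, w)
    trace.append((h, w))
for _ in range(4):              # decoder: 16 -> 32 -> 64 -> 128 -> 256
    h, w = tconv2d_out(h, w)
    trace.append((h, w))
print(trace)
```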

Discriminator

The discriminator architecture consists of repeated units of a convolutional layer, batch normalization, and Leaky ReLU, stacked one after the other. You can look at the source code of the discriminator to get a better understanding of the exact architecture. These generator and discriminator blocks can then be packed together to make the GAN.
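The two nonlinearity/normalization pieces of each discriminator unit can be sketched in plain NumPy (an inference-style batch norm without learned scale and shift, and the conventional Leaky ReLU slope of 0.2 — both illustrative assumptions, not the repo's exact layers):

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    """Leaky ReLU: pass positives through, scale negatives by alpha."""
    return np.where(x > 0, x, alpha * x)

def batch_norm(x, eps=1e-5):
    """Normalize activations over the batch axis (no learned scale/shift)."""
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

x = np.random.randn(8, 16)          # a fake batch of post-convolution activations
y = leaky_relu(batch_norm(x))       # one conv unit's norm + activation
```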

Results

The quality of the generated images improves significantly with more epochs. The model was trained for 100 epochs of 54 batches each, making 5400 training steps in total. Even with just 100 epochs, the model generates images close to the real targets. The results can be seen as follows -
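The training schedule above amounts to the following loop structure (a skeleton only, with the update steps left as comments; the batch and epoch counts are the ones reported in this section):

```python
batches_per_epoch = 54
epochs = 100

steps = 0
for epoch in range(epochs):
    for batch in range(batches_per_epoch):
        # 1) update the discriminator on a batch of (real, generated) map images
        # 2) update the generator to fool the discriminator
        steps += 1
print(steps)  # 5400 training steps, matching the count reported above
```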

Example 1 - source satellite image, target map image, generated map image

Example 2 - source satellite image, target map image, generated map image
