Here is a project I have been working on to solve the task of image colorization using deep learning.
With the generator complexity factor set to 72, the discriminator's to 16, and the adversarial loss factor to 0.1, the model generated this:
I trained on a dataset of landscapes taken from Kaggle (Landscape Pictures) and got results I was satisfied with. The image shown above is a 6x3 grid: the 3x3 grid on the left contains grayscale images taken from the internet that are not part of the training set, and the 3x3 grid on the right shows the corresponding output from a model I trained with the code in this repo.
I learned the hard way that training only the generator network (the UNet model) to minimize some norm of the difference between the original and generated examples will likely produce a model that outputs mono-colored images: averaging over all plausible colorizations pushes the predictions toward dull, desaturated colors. The GAN component was needed because it optimizes for the generator's color predictions resembling the original colors in the training set, which means the color intensity of the output will resemble that of the images it was trained on.
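To make the idea concrete, here is a minimal sketch of a combined generator objective of the kind described above: a pixel-wise L1 term plus an adversarial term weighted by a small factor. The function and parameter names are illustrative assumptions, not the actual API of this repo, and a plain NumPy version is shown instead of the training-framework code:

```python
import numpy as np

def generator_loss(fake_colors, real_colors, disc_fake_logits,
                   l1_weight=1.0, adv_weight=0.1):
    """Illustrative combined generator objective (names are hypothetical).

    l1_weight  weights the reconstruction term,
    adv_weight corresponds to the small adversarial factor (e.g. 0.1).
    """
    # Pixel-wise L1 term: pulls predictions toward the ground-truth colors,
    # but on its own tends to average toward dull, mono-colored outputs.
    l1 = np.mean(np.abs(fake_colors - real_colors))

    # Adversarial term: binary cross-entropy against the "real" label,
    # rewarding outputs whose color statistics fool the discriminator.
    p_real = 1.0 / (1.0 + np.exp(-disc_fake_logits))  # sigmoid of logits
    adv = -np.mean(np.log(p_real + 1e-8))

    return l1_weight * l1 + adv_weight * adv
```

With only the L1 term (adv_weight=0), the optimum is the per-pixel average color, which is exactly the washed-out result described above; the adversarial term penalizes outputs the discriminator can distinguish from real training images.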
I did this as part of a university course, so I also wrote a technical report on it: Colorization Project PDF