Pix2Pix-Image-Colorizer

A deep neural network that colorizes black-and-white images using the Pix2Pix architecture. Documentation can be found in TPF Redes Neuronales (in Spanish).

Table of contents

Introduction
Dataset
Quick Reference
Training
Contact
License

Introduction

Table of contents

The Pix2Pix architecture has proven effective for natural images, and the authors of the original paper claim that it performs well on a wide range of image-to-image translation problems. However, synthetic images may present a more challenging use scenario.

In this work, we use the Pix2Pix architecture to colorize black-and-white pictures of artworks.

Dataset

Table of contents

The dataset used in this work is Art Images: Drawing/Painting/Sculptures/Engravings, provided on Kaggle. The dataset consists of images of artworks in different forms, such as drawings, sculptures, and engravings. However, in this work we used only the drawings portion of the dataset, discarding the rest.

Here you can see some examples of this dataset:

(Example images from the dataset.)

This dataset is already included in this repository; credit to Danil.

Quick Reference

Table of contents

To run this example, you need to clone this repository:

git clone https://github.com/iancraz/Pix2Pix-Image-Colorizer.git

After you have cloned this repository, create the directories ./Dataset/train/ and ./checkpoints/. Once these directories exist, you can download the following pretrained model so that you do not have to retrain it yourself:

Checkpoint

Checkpoint Index

Once you have downloaded these checkpoint files, save them in the ./checkpoints/ folder.
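The setup steps above can be scripted. A minimal sketch (the folder paths come from this README; the function name is illustrative, not part of the repository):

```python
from pathlib import Path

# Folders the notebook expects, as described above.
REQUIRED_DIRS = ["Dataset/train", "checkpoints"]

def prepare_workspace(root="."):
    """Create the expected folders if missing and return their paths."""
    paths = []
    for name in REQUIRED_DIRS:
        path = Path(root) / name
        path.mkdir(parents=True, exist_ok=True)  # no-op if already present
        paths.append(path)
    return paths
```

After running this, place the downloaded checkpoint files inside checkpoints/.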

Now you are ready to test the model: run the entire Jupyter Notebook (remember to update the paths in the file so that they match your computer), and the model should run smoothly.

If you don't want to retrain the model and prefer to use the pretrained one, you MUST NOT run the cell:

train(train_dataset, 100)

You can test the model with the function:

generate_images(model, test_input, tar, save_filename=False, display_imgs=True)
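The model consumes grayscale versions of the artworks as input. As a rough illustration of the kind of preprocessing involved, here is a minimal sketch assuming the usual Pix2Pix convention of scaling pixel values to [-1, 1] (the function name and single-channel output shape are illustrative assumptions, not taken from the notebook):

```python
import numpy as np

def to_grayscale_input(rgb):
    """Convert an (H, W, 3) uint8 RGB image to a [-1, 1] float grayscale tensor.

    The luminance weights are the standard ITU-R BT.601 coefficients.
    Resizing to the network's input resolution is omitted for brevity.
    """
    gray = rgb @ np.array([0.299, 0.587, 0.114])  # (H, W) luminance
    scaled = gray / 127.5 - 1.0                   # map [0, 255] -> [-1, 1]
    return scaled[..., None]                      # add a channel axis: (H, W, 1)
```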

Training

Table of contents

Training was run for 25 epochs with a batch size of 32. The results are shown below:
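For reference, the Pix2Pix objective combines an adversarial term with an L1 reconstruction term (weighted by λ = 100 in the original paper). A minimal NumPy sketch of those losses, assuming a discriminator that outputs raw logits (this is the standard formulation, not code from the notebook):

```python
import numpy as np

LAMBDA = 100  # L1 weight from the original Pix2Pix paper

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce_with_logits(logits, labels):
    """Binary cross-entropy on raw logits."""
    p = sigmoid(logits)
    return -np.mean(labels * np.log(p + 1e-12) + (1 - labels) * np.log(1 - p + 1e-12))

def generator_loss(disc_fake_logits, generated, target):
    # The generator wants the discriminator to label fakes as real (1),
    # while also staying close to the target in L1 distance.
    gan = bce_with_logits(disc_fake_logits, np.ones_like(disc_fake_logits))
    l1 = np.mean(np.abs(target - generated))
    return gan + LAMBDA * l1

def discriminator_loss(disc_real_logits, disc_fake_logits):
    # The discriminator wants real images labeled 1 and generated ones 0.
    real = bce_with_logits(disc_real_logits, np.ones_like(disc_real_logits))
    fake = bce_with_logits(disc_fake_logits, np.zeros_like(disc_fake_logits))
    return real + fake
```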

Input Image:


Target Image:


Epoch 1:

Epoch 5:

Epoch 10:

Epoch 15:

Epoch 20:

Epoch 25:

If you prefer, you can watch the complete training GIF, covering all epochs, as follows:

Contact

Table of contents

Please do not hesitate to reach out to me if you find any issues with the code or have any questions.

License

Table of contents

MIT License

Copyright (c) 2021 Ian Cruz Diaz

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.