Figure: 1st row: original images; 2nd row: noised images; 3rd and 4th rows: outputs of two different denoising autoencoders.
Just like a traditional autoencoder, a denoising autoencoder has two parts: an encoder and a decoder.
Given noisy images as input and the corresponding clean images as targets, the encoder learns to extract the important information from the noisy image while discarding the noise, and the decoder learns to produce a clean reconstruction.
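As a minimal sketch of this training objective (assuming PyTorch; `model`, `optimizer`, and the image tensors are hypothetical placeholders), the loss compares the reconstruction of the noisy input against the clean target:

```python
import torch.nn as nn

def train_step(model, optimizer, noisy_imgs, clean_imgs):
    """One illustrative training step: reconstruct the clean image from the noisy one."""
    criterion = nn.MSELoss()
    optimizer.zero_grad()
    outputs = model(noisy_imgs)            # reconstruction from the noisy input
    loss = criterion(outputs, clean_imgs)  # compare against the clean target
    loss.backward()
    optimizer.step()
    return loss.item()
```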
After the autoencoder is trained, it should be able to de-noise new test data (a few such test results are shown in the figure above).
This project uses the MNIST dataset to train the autoencoders. Before training, random noise is added to the input images, so each autoencoder receives noisy images as input and the original, clean images as target outputs.
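A minimal sketch of this noising step, assuming PyTorch tensors and a hypothetical `noise_factor` value:

```python
import torch

def add_noise(images, noise_factor=0.5):
    """Add Gaussian noise to a batch of images and clip the result back to [0, 1]."""
    noisy = images + noise_factor * torch.randn_like(images)
    return torch.clamp(noisy, 0.0, 1.0)
```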
The following two autoencoder architectures were trained (minimal sketches of both decoder variants follow the list):
- one with Nearest Neighbor Interpolation and Convolution layers in its decoder, and
- the other with Transpose Convolution layers in its decoder.
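A rough sketch of the two decoder variants, assuming PyTorch; the channel counts and layer sizes are illustrative and not taken from the project:

```python
import torch.nn as nn

# Variant 1: nearest-neighbor upsampling followed by regular convolutions.
decoder_upsample = nn.Sequential(
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Upsample(scale_factor=2, mode="nearest"),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
    nn.Sigmoid(),
)

# Variant 2: learned upsampling via transpose convolutions.
decoder_transpose = nn.Sequential(
    nn.ConvTranspose2d(8, 16, kernel_size=2, stride=2),
    nn.ReLU(),
    nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),
    nn.Sigmoid(),
)
```

Both variants double the spatial resolution twice; the difference is that the first uses fixed nearest-neighbor interpolation and lets the convolutions refine the result, while the second learns the upsampling weights directly.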