Sketch-to-Image Conversion using CGAN
This project applies a Conditional Generative Adversarial Network (CGAN) to convert sketches into corresponding images. The training data consists of image-sketch pairs sourced from the CelebHQ dataset on Kaggle: each face photograph is paired with a corresponding hand-drawn sketch.
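Paired training requires that each sketch be loaded alongside its matching photograph. Below is a minimal sketch of such a loader, assuming the pairs sit in two parallel directories with shared filenames; the actual layout of the Kaggle CelebHQ pairing may differ, so the paths and naming here are illustrative assumptions.

```python
import os

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class PairedSketchPhotoDataset(Dataset):
    """Loads (sketch, photo) pairs from two parallel directories.

    Assumes a sketch and its photo share the same filename; this
    convention is an illustration, not the dataset's documented layout.
    """

    def __init__(self, sketch_dir, photo_dir, size=256):
        self.sketch_dir = sketch_dir
        self.photo_dir = photo_dir
        self.size = size
        # Pair files by shared name so sketch[i] matches photo[i].
        self.names = sorted(
            set(os.listdir(sketch_dir)) & set(os.listdir(photo_dir))
        )

    def _load(self, path, grayscale):
        mode = "L" if grayscale else "RGB"
        img = Image.open(path).convert(mode).resize((self.size, self.size))
        arr = np.asarray(img, dtype=np.float32) / 127.5 - 1.0  # scale to [-1, 1]
        if grayscale:
            arr = arr[..., None]  # add a channel axis: HW -> HW1
        return torch.from_numpy(arr).permute(2, 0, 1)  # HWC -> CHW

    def __len__(self):
        return len(self.names)

    def __getitem__(self, idx):
        name = self.names[idx]
        sketch = self._load(os.path.join(self.sketch_dir, name), grayscale=True)
        photo = self._load(os.path.join(self.photo_dir, name), grayscale=False)
        return sketch, photo
```

Scaling pixels to [-1, 1] matches the Tanh output range typically used by GAN generators.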
Methodology:
Implementation of a CGAN architecture in which image generation is conditioned on an input sketch.
Training the model on the paired dataset to learn the mapping between sketches and their corresponding photographs.
Use of this paired supervision to guide the translation from sketches to realistic images.
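The conditioning described above can be sketched with two small networks: a generator that maps the sketch to an RGB image, and a discriminator that scores a (sketch, image) pair by concatenating them along the channel axis. This is a minimal illustration under assumed layer sizes, not the project's actual architecture.

```python
import torch
import torch.nn as nn


class Generator(nn.Module):
    """Maps a 1-channel sketch to a 3-channel image (encoder-decoder)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1),   # downsample /2
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # downsample /2
            nn.LeakyReLU(0.2),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # upsample x2
            nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1),    # upsample x2
            nn.Tanh(),  # outputs in [-1, 1]
        )

    def forward(self, sketch):
        return self.net(sketch)


class Discriminator(nn.Module):
    """Scores a (sketch, image) pair; conditioning is by channel concat."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + 3, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),  # patch-wise real/fake map
        )

    def forward(self, sketch, image):
        return self.net(torch.cat([sketch, image], dim=1))
```

Concatenating the sketch with the image is what makes the discriminator *conditional*: it judges not only whether an image looks real, but whether it matches the given sketch.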
Key Components:
CGAN Model: Conditional Generative Adversarial Network used for the translation task.
Paired Dataset: CelebHQ dataset from Kaggle comprising image-sketch pairs.
Training and Evaluation: Model trained on the paired dataset to generate realistic images from sketches.
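One CGAN training step over a paired batch can be sketched as follows. The tiny stand-in networks keep the example runnable; the added L1 reconstruction term and its weight are a common choice for paired sketch-to-image training (in the style of pix2pix) and are an assumption here, not necessarily this project's exact loss.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in networks so the step is runnable; a real model is much deeper.
G = nn.Sequential(nn.Conv2d(1, 3, 3, padding=1), nn.Tanh())
D = nn.Sequential(nn.Conv2d(1 + 3, 1, 3, padding=1))  # conditioned via concat

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)


def train_step(sketch, photo, l1_weight=100.0):
    """One CGAN update: discriminator on real/fake pairs, then generator."""
    # --- Discriminator: real pairs -> 1, generated pairs -> 0 ---
    fake = G(sketch)
    real_logits = D(torch.cat([sketch, photo], dim=1))
    fake_logits = D(torch.cat([sketch, fake.detach()], dim=1))
    d_loss = (
        F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits))
        + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    )
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- Generator: fool D, plus an L1 term pulling output toward the photo ---
    fake_logits = D(torch.cat([sketch, fake], dim=1))
    g_loss = (
        F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
        + l1_weight * F.l1_loss(fake, photo)
    )
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

Note `fake.detach()` in the discriminator pass: it stops that loss from updating the generator, so each network is optimized only against its own objective.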
Conclusion: This project demonstrates that CGANs trained on paired data can translate hand-drawn sketches into realistic, recognizable images.
Note: Access to the CelebHQ dataset from Kaggle was instrumental in creating paired samples for effective training.