This project demonstrates a comparison of training times for a ResNet50 model on the CIFAR-10 dataset using TensorFlow, running on both CPU and GPU.
This demo is part of a learning module designed to provide hands-on experience in comparing computational performance on different hardware setups. Specifically, the goals are to:
- Train a ResNet50 model on the CIFAR-10 dataset.
- Compare the training times on a CPU and a GPU.
- Visualize the training time difference using a bar chart.
- Gain practical insights into optimizing machine learning workflows for various hardware configurations, a critical aspect of modern AI and ML development.
- Programming Language: Python
- Deep Learning Framework: TensorFlow
- Architecture: ResNet50
- Visualization: Matplotlib
- Dataset: CIFAR-10
- Hardware: NVIDIA GPU (CUDA-enabled)
- Python 3.8 or later
- TensorFlow 2.0 or later
- Matplotlib
- NVIDIA GPU with CUDA support (if using a GPU)
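If you plan to use a GPU, you can confirm that TensorFlow actually detects it before running the comparison. A minimal check (`tf.config.list_physical_devices` is available in TensorFlow 2.1 and later):

```python
import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means the "GPU" run
# will silently fall back to the CPU.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)
if not gpus:
    print("No GPU detected; training will fall back to the CPU.")
```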
- Clone the repository:

  git clone https://github.com/MohanKrishnaGR/cpu-vs-gpu-ResNet50-training-cifar10.git
- Navigate to the project directory:

  cd cpu-vs-gpu-ResNet50-training-cifar10
- Install the required Python libraries:

  pip install 'tensorflow[and-cuda]' matplotlib
- Check GPU availability (the leading `!` is only needed inside a Jupyter notebook; in a shell, run the command directly):

  nvidia-smi
- Execute the Python script:

  python train_resnet_comparison.py
The script will:
- Train the ResNet50 model on the CIFAR-10 dataset using both the CPU and the GPU.
- Record the training time for each device.
- Display a bar chart comparing the training times.
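The steps above could be implemented roughly as follows. This is a hypothetical sketch, not the repository's actual `train_resnet_comparison.py`: the model is built from scratch with `tf.keras.applications.ResNet50`, and each device is timed with a simple wall-clock helper.

```python
import time
import matplotlib
matplotlib.use("Agg")  # render the chart to a file; no display needed
import matplotlib.pyplot as plt
import tensorflow as tf

def timed(fn):
    """Run fn() and return its wall-clock duration in seconds."""
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def build_model():
    # ResNet50 trained from scratch (weights=None) on 32x32 CIFAR-10 inputs.
    model = tf.keras.applications.ResNet50(
        weights=None, input_shape=(32, 32, 3), classes=10)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def main():
    (x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
    x_train = x_train.astype("float32") / 255.0  # scale pixels to [0, 1]

    times = {}
    for device in ("/CPU:0", "/GPU:0"):
        # With TF2's default soft device placement, "/GPU:0" falls back
        # to the CPU when no GPU is present.
        def run():
            with tf.device(device):
                build_model().fit(x_train, y_train,
                                  epochs=1, batch_size=64, verbose=0)
        times[device] = timed(run)

    # Bar chart comparing the recorded training times per device.
    plt.bar(times.keys(), times.values())
    plt.ylabel("Training time (s)")
    plt.title("ResNet50 on CIFAR-10: CPU vs GPU")
    plt.savefig("training_times.png")
```

Calling `main()` downloads CIFAR-10, trains one epoch per device, and writes `training_times.png`; expect the CPU pass to take substantially longer than the GPU pass.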
Thank you for your interest in this demo! We welcome any feedback. Feel free to reach out to us.