This repository contains a Jupyter notebook that demonstrates training and evaluating a YOLOv10 model for object detection on the Rock, Paper, Scissors dataset from Roboflow.
Key Achievements:
- Achieved mAP50: 0.945 and mAP50-95: 0.732 on the validation dataset, showcasing the model's high accuracy and robustness.
To run this notebook, you need to have the following libraries installed:
- `supervision`
- `ultralytics`
- `roboflow`
- `yolov10`
You can install these libraries using the following commands:
```shell
pip install supervision
pip install ultralytics
pip install roboflow
pip install git+https://github.com/THU-MIG/yolov10.git
```
- Mount Google Drive: Access your files in Google Drive.
- Install Required Libraries: Install the necessary Python libraries (`supervision`, `ultralytics`, `roboflow`, `yolov10`).
- Import Libraries: Import essential libraries for data handling, model training, and visualization.
- Set Up Environment: Define the current working directory and prepare the environment.
- Download Model Weights: Fetch the pre-trained YOLOv10 model weights.
- Download Dataset: Download the Rock, Paper, Scissors dataset from Roboflow.
- Train the Model: Train the YOLOv10 model on the dataset.
- Display Training Results: Visualize the training results, including the confusion matrix and training metrics.
- Evaluate the Model: Evaluate the trained model on the validation set.
- Make Predictions: Use the trained model to run inference on the test images and on a single sample image.
- Process Video: Apply the model to a video to detect objects frame by frame and save the results.
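The download/train/evaluate/predict steps above can be sketched in Python as a minimal outline, not the notebook's exact code. The API key, workspace name, project slug, dataset version, epoch count, and image path below are placeholders you must replace with your own values:

```python
from roboflow import Roboflow
from ultralytics import YOLO

# Download the Rock, Paper, Scissors dataset from Roboflow.
# api_key, workspace, project slug, and version number are placeholders.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("YOUR_WORKSPACE").project("rock-paper-scissors")
dataset = project.version(1).download("yolov8")  # YOLO-format labels

# Load pre-trained YOLOv10 weights and fine-tune on the dataset.
# (Recent ultralytics releases bundle YOLOv10 weights; the THU-MIG fork
# installed above alternatively exposes `from ultralytics import YOLOv10`.)
model = YOLO("yolov10n.pt")
model.train(data=f"{dataset.location}/data.yaml", epochs=100, imgsz=640)

# Evaluate on the validation split and report mAP.
metrics = model.val()
print(f"mAP50: {metrics.box.map50:.3f}, mAP50-95: {metrics.box.map:.3f}")

# Run inference on a single image (placeholder path).
results = model("path/to/test_image.jpg")
results[0].show()
```

Training and dataset download require a GPU runtime and a valid Roboflow API key, so this sketch is best run inside the notebook's Colab environment.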
Please refer to the Jupyter Notebook in this repository for detailed code and step-by-step instructions.
The model achieved impressive results on the validation dataset:
- mAP50: 0.945
- mAP50-95: 0.732
Below is a visual representation of the model's performance on a sample video:
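The frame-by-frame video processing described above can be sketched with `supervision`'s video helper (a minimal outline; the weights path and the input/output video filenames are placeholders):

```python
import supervision as sv
from ultralytics import YOLO

# Placeholder path to the fine-tuned weights produced by training.
model = YOLO("runs/detect/train/weights/best.pt")

box_annotator = sv.BoxAnnotator()
label_annotator = sv.LabelAnnotator()

def callback(frame, index: int):
    # Run the detector on one frame, then draw boxes and class labels.
    results = model(frame, verbose=False)[0]
    detections = sv.Detections.from_ultralytics(results)
    annotated = box_annotator.annotate(scene=frame.copy(), detections=detections)
    return label_annotator.annotate(scene=annotated, detections=detections)

# Read input.mp4 frame by frame, annotate each frame, and write output.mp4.
sv.process_video(
    source_path="input.mp4",
    target_path="output.mp4",
    callback=callback,
)
```

`sv.process_video` handles decoding and encoding, so the callback only needs to map one frame to its annotated version.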
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License. See the LICENSE file for details.