Note: The license for this package is available in the license.txt file. By running the COVID19_LabelMaskAutomation.mlx script, you will download certain third-party content that is licensed under separate license agreements.
Background
Coronavirus disease (COVID-19) is a disease in humans caused by a new strain of coronavirus first identified in 2019. Coronaviruses are a large family of viruses that cause illness ranging from the common cold to severe respiratory syndromes such as Middle East Respiratory Syndrome (MERS-CoV) and Severe Acute Respiratory Syndrome (SARS-CoV). Many people across the world are currently affected and being treated, causing a global pandemic. Several countries have declared a national emergency and have quarantined millions of people.
As a small contribution to this worldwide effort, I've created a deep learning model that detects face masks.
The workflow covers semi-automatic data labeling, model training, and GPU code generation for real-time inference.
MathWorks Korea staff kindly shared selfies of themselves wearing masks (non-distributable) while working from home, which made building the dataset easy.
Unfortunately, that dataset cannot be distributed, so you need to create your own dataset to train your own model. Some sample data is included in the SampleMaskData folder.
- Label images
  - Automated labeling with a pretrained model
  - Use the Image Labeler app for interactive label automation
- Train object detection models
  - SSD (Single Shot Multibox Detector)
  - YOLOv2 (You Only Look Once v2)
- Generate a CUDA MEX to accelerate inference
This file covers the basics of ground truth labeling and shows how we semi-automate it with a pretrained open-source model.
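As a rough illustration of the pre-labeling idea, the sketch below runs a stock face detector over the sample images and wraps the resulting boxes in a groundTruth object that can then be refined by hand. Note that the actual script imports a pretrained open-source (Caffe) model; the cascade face detector, the 'Mask' label name, and the folder path here are only stand-ins.

```matlab
% Pre-label candidate regions with a built-in detector (a stand-in for the
% pretrained open-source model used in the script), then refine manually.
imds = imageDatastore(fullfile(pwd, 'SampleMaskData'));   % sample images
faceDetector = vision.CascadeObjectDetector();            % stock face detector

Mask = cell(numel(imds.Files), 1);
for k = 1:numel(imds.Files)
    I = readimage(imds, k);
    Mask{k} = faceDetector(I);      % M-by-4 [x y width height] candidate boxes
end

% Wrap the boxes in a groundTruth object so they can be reviewed and
% corrected interactively in the Image Labeler app.
ldc = labelDefinitionCreator;
addLabel(ldc, 'Mask', labelType.Rectangle);
gTruth = groundTruth(groundTruthDataSource(imds.Files), ...
                     create(ldc), table(Mask));
```

In the actual workflow the same idea is packaged as an automation algorithm inside the Image Labeler, so each suggested box can be accepted or adjusted image by image.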
This file covers the entire training process, from data augmentation through architecture creation to evaluation. It uses the high-level APIs for the SSD (Single Shot Multibox Detector) and YOLOv2 (You Only Look Once v2) network architectures so the two can be compared.
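For orientation, here is a hedged sketch of the YOLOv2 branch of that workflow, assuming a groundTruth object gTruth from the labeling step; the ResNet-50 backbone, feature layer, anchor count, and training options are illustrative choices, not necessarily the ones used in the script.

```matlab
% Convert the labeled data into a training table and estimate anchor boxes.
trainingData = objectDetectorTrainingData(gTruth);
anchorBoxes  = estimateAnchorBoxes(boxLabelDatastore(trainingData(:, 2:end)), 4);

% Build a YOLOv2 network on top of a pretrained backbone (ResNet-50 here,
% which needs its own support package; any pretrained CNN can be swapped in).
inputSize  = [224 224 3];
numClasses = 1;                                    % single 'Mask' class
lgraph = yolov2Layers(inputSize, numClasses, anchorBoxes, ...
                      resnet50, 'activation_40_relu');

opts = trainingOptions('sgdm', ...
    'InitialLearnRate', 1e-3, ...
    'MiniBatchSize', 16, ...
    'MaxEpochs', 30, ...
    'Shuffle', 'every-epoch');

detector = trainYOLOv2ObjectDetector(trainingData, lgraph, opts);
```

The SSD branch follows the same pattern with ssdLayers and trainSSDObjectDetector, which is what makes the side-by-side comparison straightforward.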
Once training is complete, test the trained model on still images, video files, and live webcam streams. For each task, see the code below; a combined sketch follows the list.
- Test the trained model on a still image.
- Test the trained model on an existing video.
- Test the trained model on a live webcam stream. This requires the MATLAB Support Package for USB Webcams. If you do not have the required support package installed, the software provides a download link.
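Below is a hedged sketch of all three tests, assuming detector is the detector returned by training; the file names and detection threshold are placeholders.

```matlab
% 1) Still image (file name is a placeholder).
I = imread(fullfile('SampleMaskData', 'sample.jpg'));
[bboxes, scores] = detect(detector, I, 'Threshold', 0.5);
imshow(insertObjectAnnotation(I, 'rectangle', bboxes, scores));

% 2) Existing video file (file name is a placeholder).
reader = VideoReader('maskVideo.mp4');
player = vision.DeployableVideoPlayer;
while hasFrame(reader)
    frame = readFrame(reader);
    [bboxes, scores] = detect(detector, frame, 'Threshold', 0.5);
    step(player, insertObjectAnnotation(frame, 'rectangle', bboxes, scores));
end
release(player);

% 3) Live webcam stream (requires the MATLAB Support Package for USB Webcams).
cam    = webcam;
player = vision.DeployableVideoPlayer;
keepRunning = true;
while keepRunning
    frame = snapshot(cam);
    [bboxes, scores] = detect(detector, frame, 'Threshold', 0.5);
    step(player, insertObjectAnnotation(frame, 'rectangle', bboxes, scores));
    keepRunning = isOpen(player);    % stop when the player window is closed
end
clear cam
```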
The training code also includes a few lines for code generation; a minimal MEX-generation sketch follows the prerequisites list below.
Prerequisites
- CUDA enabled NVIDIA GPU with compute capability 3.2 or higher.
- NVIDIA CUDA toolkit and driver.
- NVIDIA cuDNN library.
- Environment variables for the compilers and libraries. For information on the supported versions of the compilers and libraries, see Third-party Products (GPU Coder). For setting up the environment variables, see Setting Up the Prerequisite Products (GPU Coder).
- GPU Coder Interface for Deep Learning Libraries support package. To install this support package, use the Add-On Explorer.
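Once the prerequisites above are in place, generating the CUDA MEX boils down to a few commands like the sketch below. The entry-point function name, the saved detector file, and the input size are illustrative placeholders rather than the exact ones used in the training script.

```matlab
% detectMask.m -- illustrative entry-point function for code generation.
function [bboxes, scores] = detectMask(in)
    persistent detector
    if isempty(detector)
        % 'maskDetector.mat' stands in for the saved trained detector.
        detector = coder.loadDeepLearningNetwork('maskDetector.mat');
    end
    [bboxes, scores] = detect(detector, in, 'Threshold', 0.5);
end
```

With that function on the path, the MEX is generated with GPU Coder:

```matlab
% Generate a CUDA MEX targeting cuDNN from the entry-point function above.
cfg = coder.gpuConfig('mex');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('cudnn');
codegen -config cfg detectMask -args {ones(224, 224, 3, 'uint8')} -report
```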
Requires
- MATLAB
- Deep Learning Toolbox
- Image Processing Toolbox
- Computer Vision Toolbox
- Parallel Computing Toolbox
- MATLAB Coder
- GPU Coder
Support Packages
- Deep Learning Toolbox Importer for Caffe Models
- MATLAB Support Package for USB Webcams
- GPU Coder Interface for Deep Learning Libraries
Note that this demo was developed on Windows; minor issues may occur on other operating systems.
Download a free MATLAB trial for Deep Learning
View Webinar for the entire model development (Korean)
Copyright 2020 The MathWorks, Inc.