---
title: "Brain Tumor Prediction using ML and DL"
author: "Nirmal"
date: "`r Sys.Date()`"
output: html_document
---
Building a detection model using a convolutional neural network in TensorFlow & Keras.
I used a brain MRI images dataset found on Kaggle. You can find it here.
About the data:
The dataset contains 2 folders: yes and no, which together contain 40 brain MRI images. The folder yes contains 20 brain MRI images that are tumorous, and the folder no contains 20 brain MRI images that are non-tumorous.
Note: viewing IPython notebooks with the GitHub viewer sometimes doesn't work as expected, so you can always view them using nbviewer.
Why did I use data augmentation?
Since this is a small dataset, there weren't enough examples to train the neural network. Data augmentation was also useful in tackling the class imbalance in the data.
Further explanations are found in the Data Augmentation notebook (Data Augmentation.ipynb).
Before data augmentation, the dataset consisted of:
20 positive and 20 negative examples, resulting in 40 example images.
After data augmentation, now the dataset consists of:
1000 positive and 814 negative examples, resulting in 1814 example images.
Note: these 1814 examples also include the 40 original images. They are all found in the folder named 'augmented_data'.
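For context, here is a minimal sketch of how this kind of augmentation can be done with Keras' ImageDataGenerator. The transform parameters, file paths, and the count of copies per image are illustrative assumptions, not the project's actual settings (those are in Data Augmentation.ipynb):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator, load_img, img_to_array
import numpy as np

# Illustrative augmentation settings -- not necessarily the ones used here.
datagen = ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    shear_range=0.1,
    horizontal_flip=True,
    vertical_flip=True,
)

# Load one MRI image (file path is illustrative) and add a batch dimension.
image = img_to_array(load_img('yes/Y1.jpg'))
image = np.expand_dims(image, axis=0)

# Generate augmented copies and save them to the augmented-data folder
# (the output directory must already exist).
n_generated = 0
for _ in datagen.flow(image, batch_size=1,
                      save_to_dir='augmented_data/yes',
                      save_prefix='aug', save_format='jpg'):
    n_generated += 1
    if n_generated >= 20:  # e.g. ~20 augmented copies per original image
        break
```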
For every image, the following preprocessing steps were applied (see the sketch after this list):
- Crop the part of the image that contains only the brain (which is the most important part of the image).
- Resize the image to have a shape of (240, 240, 3) = (image_width, image_height, number_of_channels), because images in the dataset come in different sizes and all images must have the same shape to be fed as input to the neural network.
- Apply normalization: to scale pixel values to the range 0-1.
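A minimal sketch of these three preprocessing steps, assuming OpenCV is available. The crop step uses the common largest-contour technique, which is an assumption about how the cropping was done here:

```python
import cv2
import numpy as np

def preprocess_image(image, target_size=(240, 240)):
    """Crop to the brain region, resize, and normalize one BGR image."""
    # 1. Crop: threshold the image, find the largest contour,
    #    and cut the image to that contour's bounding box.
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (5, 5), 0)
    thresh = cv2.threshold(gray, 45, 255, cv2.THRESH_BINARY)[1]
    thresh = cv2.erode(thresh, None, iterations=2)
    thresh = cv2.dilate(thresh, None, iterations=2)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(c)
    cropped = image[y:y + h, x:x + w]

    # 2. Resize so every image has shape (240, 240, 3).
    resized = cv2.resize(cropped, target_size, interpolation=cv2.INTER_CUBIC)

    # 3. Normalize pixel values to the range [0, 1].
    return resized.astype(np.float32) / 255.0
```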
The data was split in the following way: 1. 70% of the data for training. 2. 15% of the data for validation. 3. 15% of the data for testing.
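A minimal sketch of such a 70/15/15 split using scikit-learn; the stratify and random_state arguments are my own assumptions, not necessarily what the original notebook used:

```python
from sklearn.model_selection import train_test_split

# X: array of preprocessed images, y: labels (1 = tumorous, 0 = non-tumorous).
# First hold out 30% of the data, then split that portion in half,
# giving 70% train, 15% validation, 15% test.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
X_val, X_test, y_val, y_test = train_test_split(
    X_rest, y_rest, test_size=0.50, stratify=y_rest, random_state=42)
```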
This is the architecture that I've built:
Understanding the architecture:
Each input x (image) has a shape of (240, 240, 3) and is fed into the neural network, where it goes through the following layers (a Keras sketch follows the list):
- A Zero Padding layer with a padding of (2, 2).
- A convolutional layer with 32 filters, with a filter size of (7, 7) and a stride equal to 1.
- A batch normalization layer to normalize the activations, which speeds up training.
- A ReLU activation layer.
- A Max Pooling layer with pool size f = 4 and stride s = 4.
- A second Max Pooling layer with f = 4 and s = 4, same as before.
- A flatten layer in order to flatten the 3-dimensional matrix into a one-dimensional vector.
- A Dense (output unit) fully connected layer with one neuron with a sigmoid activation (since this is a binary classification task).
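Here is a minimal Keras sketch of the layer stack described above; the functional-API style and variable names are my own choices, so the original notebook may be organized differently:

```python
from tensorflow.keras.layers import (Input, ZeroPadding2D, Conv2D,
                                     BatchNormalization, Activation,
                                     MaxPooling2D, Flatten, Dense)
from tensorflow.keras.models import Model

def build_model(input_shape=(240, 240, 3)):
    X_input = Input(input_shape)
    X = ZeroPadding2D((2, 2))(X_input)           # zero padding of (2, 2)
    X = Conv2D(32, (7, 7), strides=(1, 1))(X)    # 32 filters of size (7, 7), stride 1
    X = BatchNormalization(axis=3)(X)            # normalize the feature maps
    X = Activation('relu')(X)                    # ReLU activation
    X = MaxPooling2D(pool_size=(4, 4))(X)        # max pooling, f = 4, s = 4
    X = MaxPooling2D(pool_size=(4, 4))(X)        # second max pooling, f = 4, s = 4
    X = Flatten()(X)                             # flatten to a 1-D vector
    X = Dense(1, activation='sigmoid')(X)        # single sigmoid output unit
    return Model(inputs=X_input, outputs=X)

model = build_model()
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```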
Why this architecture?
Firstly, I applied transfer learning using ResNet50 and VGG-16, but these models were too complex for the size of the data and were overfitting. Of course, you may get good results applying transfer learning with these models using data augmentation. But I was training on a computer with a 6th-generation Intel i7 CPU and 8 GB of memory, so I had to take computational complexity and memory limitations into consideration.
So why not try a simpler architecture and train it from scratch? And it worked :)
Now, the best model (the one with the best validation accuracy) detects brain tumors with:
81.7% accuracy on the test set.
0.85 F1 score on the test set.
These results are very good given that the data is balanced, so accuracy is a meaningful metric here.
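For reference, these metrics can be computed with scikit-learn, as in this sketch (model, X_test, and y_test are assumed from the earlier sketches, and the 0.5 decision threshold is my assumption):

```python
from sklearn.metrics import accuracy_score, f1_score

# Predict tumor probabilities on the test set and threshold at 0.5.
y_prob = model.predict(X_test)
y_pred = (y_prob > 0.5).astype(int).ravel()  # flatten (n, 1) to (n,)

print("Test accuracy:", accuracy_score(y_test, y_pred))
print("Test F1 score:", f1_score(y_test, y_pred))
```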
Performance table of the best model:
What's in the files?
- The code in the IPython notebooks.
- The weights for all the models. The best model is named 'cnn-parameters-improvement-{epoch:02d}-{val_accuracy:.2f}.keras'.
- The models are stored as .keras files. They were saved during training with a ModelCheckpoint callback, as shown below, and can be restored with load_model (see the sketch after the code):
```python
from tensorflow.keras.callbacks import ModelCheckpoint

# Define the directory where you want to save your models
model_dir = "augmented_data/models/"

# Build the filepath string, embedding the epoch number and validation
# accuracy in the filename, with the `.keras` extension
filepath = model_dir + "cnn-parameters-improvement-{epoch:02d}-{val_accuracy:.2f}.keras"

# Save a checkpoint whenever validation accuracy improves
checkpoint = ModelCheckpoint(filepath=filepath, monitor='val_accuracy',
                             verbose=1, save_best_only=True, mode='max')
```
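And a minimal sketch of restoring a checkpoint; the filename below is illustrative, since the actual name embeds the epoch and validation accuracy reached during training:

```python
from tensorflow.keras.models import load_model

# Restore a saved checkpoint; substitute the actual filename produced
# during training (this one is illustrative).
best_model = load_model("augmented_data/models/cnn-parameters-improvement-23-0.92.keras")
```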
- The original data is in the folders named 'yes' and 'no', and the augmented data is in the folder named 'augmented_data'.