
Pneumonia X-ray Deep Learning Classification

"Chest x-ray image of normal lungs"

Chest x-ray image of normal lungs

This project involves building a deep neural network that trains on a large dataset to perform non-trivial binary classification: using chest x-ray images to determine whether or not a patient has pneumonia.

The Dataset

The dataset originates from Kermany et al. on Mendeley.

The particular subset used for this project is sourced via Kaggle. The subset contains 5,863 RGB chest x-ray images in JPEG format. Images are organized into folders for training, validation, and testing, each of which is split into 'NORMAL' and 'PNEUMONIA' subfolders.
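For reference, the extracted archive is laid out roughly as follows (the chest_xray root folder name follows the Kaggle subset; exact names may differ):

```
chest_xray/
├── train/
│   ├── NORMAL/
│   └── PNEUMONIA/
├── val/
│   ├── NORMAL/
│   └── PNEUMONIA/
└── test/
    ├── NORMAL/
    └── PNEUMONIA/
```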

Objectives

My aim for this project was not to build a perfect model; rather, the intent was to build a working model and to explore how changes to model architecture and hyperparameters can impact model accuracy. In addition, I wanted to make use of cloud computing (in this case, Google Colab's free GPU and browser-based Jupyter Notebook) and to suggest methods for distributing working models to end users.

Data Preparation

"Bar plot of test data counts per class"

I acquired project data from Kaggle via its API, in a compressed (Zip) format, and extracted it into directories created within Colab's temporary environment. I then standardized, reshaped, and inspected the image data. The prepared images were fed into the model as 150 x 150 pixel arrays with 3 color channels (RGB).
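A minimal sketch of these steps, assuming the public Kaggle dataset slug and Keras's ImageDataGenerator; the paths and batch size here are illustrative, not taken from the notebook:

```python
import os

from keras.preprocessing.image import ImageDataGenerator

# Download and extract the dataset with the Kaggle API
# (slug assumed; requires kaggle.json credentials in the environment).
os.system("kaggle datasets download -d paultimothymooney/chest-xray-pneumonia")
os.system("unzip -q chest-xray-pneumonia.zip -d data")

# Rescale pixel values to [0, 1]; resize everything to 150 x 150 in RGB.
datagen = ImageDataGenerator(rescale=1.0 / 255)

train_generator = datagen.flow_from_directory(
    "data/chest_xray/train",
    target_size=(150, 150),  # pixel dimensions fed to the network
    color_mode="rgb",        # 3 channels
    batch_size=32,
    class_mode="binary",     # NORMAL vs. PNEUMONIA
)
```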

Modeling

Using the Keras library, I designed a sequentially layered convolutional neural network with max pooling and rectified linear (ReLU) activation functions. The initial model was compiled with the RMSprop optimizer, a relatively fast optimizer that independently adapts the gradient step size for each model weight.
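The layer counts, filter sizes, and learning rate below are illustrative rather than the notebook's exact architecture; the sketch shows the general shape of such a network, including the sigmoid output discussed below:

```python
from keras import layers, models, optimizers

# Alternating convolution and max-pooling blocks with ReLU activations,
# flattened into dense layers; the single sigmoid unit handles the
# binary NORMAL/PNEUMONIA decision.
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(150, 150, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# RMSprop adapts the gradient step size per weight; binary cross-entropy
# pairs with the sigmoid output.
model.compile(
    optimizer=optimizers.RMSprop(learning_rate=1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy"],
)
```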

I instantiated callbacks for the model, which programmatically saved model weights whenever they improved over the previous epoch. In addition, I made use of Keras's ReduceLROnPlateau and EarlyStopping callbacks, set to either reduce the model's learning rate or stop training when the loss did not improve over a set number of epochs.
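A sketch of how these callbacks might be wired together, assuming a val_generator built the same way as the training generator; the filenames, patience values, and epoch count are illustrative:

```python
from keras.callbacks import EarlyStopping, ModelCheckpoint, ReduceLROnPlateau

callbacks = [
    # Save weights only when validation loss improves on the best so far.
    ModelCheckpoint("best_weights.h5", monitor="val_loss",
                    save_best_only=True, save_weights_only=True),
    # Halve the learning rate after 2 stagnant epochs.
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=2),
    # Stop training entirely after 5 stagnant epochs.
    EarlyStopping(monitor="val_loss", patience=5),
]

history = model.fit(
    train_generator,
    epochs=30,
    validation_data=val_generator,  # assumed: built like train_generator
    callbacks=callbacks,
)
```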

"" "Graphical sequence of model layers"

Graphical sequence of model layers

Since the task was binary classification ('NORMAL' or 'PNEUMONIA'), the model was designed to output through a sigmoid activation layer.

Evaluation

After building additional models, I identified the most accurate among them for distribution. In the project notebook, I test reloading the saved model and report its performance metrics.
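A minimal sketch of that reload-and-evaluate step, assuming a test_generator created with shuffle=False (the filename is illustrative):

```python
from keras.models import load_model
from sklearn.metrics import classification_report

# Reload the persisted model and confirm it still performs as expected.
model = load_model("best_model.h5")
loss, acc = model.evaluate(test_generator)
print(f"test loss: {loss:.3f}  test accuracy: {acc:.3f}")

# Per-class precision/recall; label order matches predictions only
# because the generator was created with shuffle=False.
y_pred = (model.predict(test_generator) > 0.5).astype(int).ravel()
print(classification_report(test_generator.classes, y_pred,
                            target_names=["NORMAL", "PNEUMONIA"]))
```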

The resulting selection is still a baseline model, and I would be interested in returning to the project for additional tuning and/or refactoring (including building and evaluating a model with a different deep learning library, e.g., PyTorch).

Featured Notebooks/Analysis

Non-Technical Presentation

Technologies

  • framework: Google Colab / Jupyter Notebook
  • language: Python
  • libraries and modules:
    • NumPy
    • os
    • Keras:
      • callbacks
      • layers
      • load_model
      • models
      • plot_model
      • optimizers
    • scikit-learn:
      • metrics
      • model_selection
    • time ...
  • visualization libraries:
    • Matplotlib
