This project demonstrates how to perform continuous delivery for a Python-based machine learning application using the Flask web framework.
We use a scikit-learn model, pre-trained to predict housing prices in Boston from features such as the average number of rooms per home, highway accessibility, the pupil-teacher ratio, and more.
The project uses a Flask app, app.py, that serves out housing-price predictions (inference) through API calls.
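The actual implementation lives in app.py in the repo; as a rough sketch of the idea (the route name, model file name, and payload shape below are assumptions for illustration, not the repo's exact code), a minimal Flask prediction service looks something like this:

```python
# Illustrative sketch only -- see app.py in the repo for the real implementation.
from flask import Flask, request, jsonify
import joblib
import pandas as pd

app = Flask(__name__)

# Assumed file name for the pre-trained sklearn model.
model = joblib.load("boston_housing_prediction.joblib")

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON payload of feature columns,
    # e.g. {"RM": {"0": 6.575}, "PTRATIO": {"0": 15.3}, ...}
    payload = request.get_json()
    features = pd.DataFrame(payload)
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```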
Go to the Azure Portal and launch a Bash Cloud Shell:
- Clone the Repo:
git clone git@github.com:mhaywardhill/sklearn-flask-webapp.git
- Set up the Python virtual environment:
python3 -m venv ~/.Dev-Ops
source ~/.Dev-Ops/bin/activate
- Run
make all
- Create an app service and deploy the app in the Cloud Shell:
az webapp up -n <your-appservice> --sku FREE
- Verify that the deployed application works by browsing to the deployed URL:
Go to https://<your-appservice>.azurewebsites.net/
and you should see the same output as in the screenshot below:
Change the line in make_predict_azure_app.sh to match your deployed app's prediction endpoint:
-X POST https://<yourappname>.azurewebsites.net:$PORT/predict
./make_predict_azure_app.sh
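If you prefer to call the endpoint from Python rather than the shell script, a rough equivalent is sketched below (the feature names and values are illustrative; the exact payload lives in make_predict_azure_app.sh):

```python
import requests

# Replace <your-appservice> with your App Service name.
url = "https://<your-appservice>.azurewebsites.net/predict"

# Illustrative Boston-housing feature payload; the real one is in make_predict_azure_app.sh.
payload = {
    "CHAS": {"0": 0},
    "RM": {"0": 6.575},
    "TAX": {"0": 296.0},
    "PTRATIO": {"0": 15.3},
    "B": {"0": 396.9},
    "LSTAT": {"0": 4.98},
}

response = requests.post(url, json=payload)
print(response.json())  # expected shape: {"prediction": [<predicted price>]}
```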
A successful prediction will look like this:
Next, we'll need to create an Azure DevOps project and pipeline. Please refer to the official documentation for more detail.
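Performance testing is driven by Locust. A minimal locustfile sketch is shown below (the /predict path and payload are assumptions based on the app described above, not necessarily the repo's actual load test):

```python
# locustfile.py -- illustrative sketch of a Locust load test against the prediction endpoint.
from locust import HttpUser, task, between

# Illustrative feature payload (same shape as the prediction example above).
SAMPLE_PAYLOAD = {
    "CHAS": {"0": 0},
    "RM": {"0": 6.575},
    "TAX": {"0": 296.0},
    "PTRATIO": {"0": 15.3},
    "B": {"0": 396.9},
    "LSTAT": {"0": 4.98},
}

class PredictionUser(HttpUser):
    wait_time = between(1, 3)

    @task
    def predict(self):
        # POST the sample features to the /predict endpoint.
        self.client.post("/predict", json=SAMPLE_PAYLOAD)
```

Run it with something like `locust -f locustfile.py --host https://<your-appservice>.azurewebsites.net`.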
Below is a screenshot of the performance testing using Locust.
Currently, the CI and CD workflows in GitHub Actions run in parallel; this is not ideal, as the application should only be deployed once it has passed the tests.