This is a short project highlighting how easy it is to build a machine learning model that amplifies bias. I also attempt to mitigate that bias with the disparate impact removal algorithm provided by the AI Fairness 360 (AIF360) toolkit.
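For context, here is a minimal sketch (not this project's actual code) of how AIF360's `DisparateImpactRemover` is typically applied. The toy data, the protected attribute name (`sex`), and the repair level are illustrative assumptions.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import DisparateImpactRemover

# Toy data: a protected attribute ("sex"), one feature, and a binary label.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1, 1, 0],
    "score": [0.2, 0.4, 0.3, 0.8, 0.7, 0.9, 0.6, 0.5],
    "label": [0, 0, 0, 1, 1, 1, 1, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["label"],
    protected_attribute_names=["sex"],
)

# Disparate impact: ratio of favorable-outcome rates between the
# unprivileged and privileged groups (1.0 means parity).
metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{"sex": 0}],
    privileged_groups=[{"sex": 1}],
)
print("Disparate impact before repair:", metric.disparate_impact())

# Repair the feature values so they carry less information about the
# protected attribute (requires AIF360's optional BlackBoxAuditing
# dependency). A classifier would then be trained on the repaired
# features, typically with the protected attribute dropped.
di_remover = DisparateImpactRemover(repair_level=1.0, sensitive_attribute="sex")
repaired_dataset = di_remover.fit_transform(dataset)
```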
I have written a short companion Medium article in which I explain the goals of this exercise and explore the meaning of its results.
Read the article here.
Thanks!