Alice Loukianova edited this page Apr 27, 2020 · 7 revisions

Motivation

Facial recognition (FR) is everywhere in modern technology, whether we are aware of it or not. It has been used to solve crimes, to track down missing children, and more; it has become an efficient tool that we rely on for our population's well-being. Because of this, the bias present in the technology has grown along with its popularity, creating a dire need for a resolution. If this bias persists, FR technology will never be fully accurate and will always have flaws in its identification process. Two major issues must be tackled to solve this problem: first, there is no publicly available, unbiased dataset for companies to use in their software; second, each form of FR is unique, so a single standard is needed so that flaws do not vary from one piece of software to another.

Uses

Facial recognition (FR) technology is an advanced form of biometric security that assesses a person's face and compares it to a known database to establish an identity. Although the concept has existed since the mid-1900s, the software has become increasingly popular in recent years as the need for more complex security measures has grown. With this intricate technology, however, come complicated issues. A major problem that has remained prevalent is bias against certain users because of their demographic. We will demonstrate this bias and look at ways to eliminate it, creating a nondiscriminatory evaluation available to all FR users. It is unfair that some users experience more errors than others because of their backgrounds.

Goals

We work to reveal the bias present in current FR technology and to create a means of eliminating it in the form of a fairness tool, our dashboard. We use the Balanced Faces in the Wild (BFW) database, which equally represents each ethnicity and gender, so that the set of faces examined for identification is itself unbiased. This balance is key to an equal starting point: ethnicity is represented by four subgroups (Asian, black, Indian, and white) and gender by male and female. Crossing these yields eight demographic subgroups (black males, black females, Asian males, and so forth), which we use to show how the bias differs between each combination.
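The eight demographic subgroups described above can be enumerated with a short sketch. The label format here is illustrative only, not the exact naming convention used by BFW:

```python
from itertools import product

# The four ethnicity subgroups and two genders described above.
ethnicities = ["asian", "black", "indian", "white"]
genders = ["female", "male"]

# Cross them to get the eight demographic subgroups used for bias analysis.
subgroups = [f"{e}_{g}" for e, g in product(ethnicities, genders)]

print(subgroups)
print(len(subgroups))  # 8 subgroup combinations
```

Grouping evaluation results by these eight labels is what lets error rates be compared per combination rather than only per ethnicity or per gender.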

Gender and Race Database Statistics: statistics of the Balanced Faces in the Wild (BFW) [1] database, grouped here by subgroup and a specific value. There are a million pairs in total under analysis, with a constant 30,000 positive pairs assessed for each gender within each subgroup. Overall, females (F) perform worse than males (M) for the Indian (I) and white (W) subgroups, while males perform worse than females for the Asian (A) and black (B) subgroups.
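A quick arithmetic check of these counts, under the assumption (not fully specified above) that the 30,000 positive pairs apply per gender within each of the four ethnicity subgroups:

```python
# Assumption: 30,000 positive pairs for each gender within each of the
# four ethnicity subgroups of BFW (4 ethnicities x 2 genders).
positive_per_group = 30_000
n_ethnicities = 4
n_genders = 2

positive_pairs = positive_per_group * n_ethnicities * n_genders
total_pairs = 1_000_000  # "a million pairs total under analysis"
negative_pairs = total_pairs - positive_pairs

print(positive_pairs)  # 240000 positive pairs
print(negative_pairs)  # 760000 remaining (negative) pairs
```

Keeping the positive-pair count constant across subgroups is what makes per-subgroup error rates directly comparable.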
