Elena Ponte
What Is Algorithmic Bias?
Updated: Jun 8, 2021
You may have heard the term “algorithmic bias.” The expression hit mainstream news outlets again recently when Twitter scrapped its image cropping algorithm—turns out “saliency algorithms” don’t live up to their name when they favor white people over black people. The same week, Google joined Twitter in the spotlight of shame after its new dermatology app, which claimed to recognize 288 different skin conditions, was found to be unreliable for people with darker skin tones. The app’s algorithm was trained and tested on a dataset that vastly underrepresented these users.

This is not news. But unfortunately, neither is the root of the problem: We trust computers too implicitly. In a 2016 Georgia Institute of Technology experiment that simulated emergencies, every participant chose to follow a robot along an unknown path through a hidden door instead of escaping via the marked exit through which they’d entered. Why? They thought, as many do, that algorithms are objective and True—with a capital T. They apotheosized computers as the beacons of rationality and mathematical order they wanted and needed them to be.
But the Truth is that, largely, algorithms are biased. Algorithms don’t make fairness; they automate the status quo. They are trained with historical data and represent past social realities (or, at best, current realities). They can codify racism, sexism, and bigotry. Often, the determining factor is the dataset on which they’re trained.
Defining Algorithmic Bias
Algorithmic bias is when an algorithm exhibits a prejudice or inclination toward one outcome over another. That bias can stem from how, by whom, and with what data the algorithm was trained.
Humans have prejudices and biases. We are raised in different social and economic realities, and our ideologies and beliefs reflect as much. We are not rational and objective, despite our best efforts. We are, in the end, human. And we are the ones harvesting, collecting, choosing, and labeling the data that train algorithms.
The Takeaway: Biased datasets result in biased algorithms. A dataset that overrepresents one group and underrepresents another will produce a biased algorithm.
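To make the takeaway concrete, here is a minimal, hypothetical sketch in Python (using NumPy and scikit-learn). Two groups follow slightly different patterns, the training set contains 5,000 examples from group A but only 100 from group B, and the resulting model performs noticeably worse on group B. The groups, features, and numbers are invented purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's features follow a slightly different distribution,
    # and the "correct" decision rule differs between groups.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * shift).astype(int)
    return X, y

# Training data: group A is overrepresented, group B is underrepresented.
X_a, y_a = make_group(5000, shift=0.0)
X_b, y_b = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([X_a, X_b]), np.concatenate([y_a, y_b]))

# Evaluate on fresh, equal-sized samples from each group.
X_a_test, y_a_test = make_group(2000, shift=0.0)
X_b_test, y_b_test = make_group(2000, shift=1.5)
print("accuracy on group A:", model.score(X_a_test, y_a_test))  # typically high
print("accuracy on group B:", model.score(X_b_test, y_b_test))  # typically much lower

The model never sees enough of group B to learn its pattern, so it simply applies group A's pattern to everyone: the same mechanism, at much larger scale, behind the examples below.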
Why Is Algorithmic Bias Dangerous?
Algorithms are everywhere, all the time. AIs are sorting and scoring us in a system without appeals. Job applications, insurance, loans, mortgages: algorithms sort the winners and the losers.
The pervasive use of data-driven algorithms has complex social implications. The best tool for change—for the algorithm and us—is information. So, learn more about the way that algorithmic bias has infiltrated every aspect of our lives. Watch the documentary Coded Bias or read Cathy O’Neil’s 2016 book Weapons of Math Destruction.
Algorithmic Bias Examples
It would be impossible to speak to every example of algorithmic bias. Instead, we’ve gathered a few that best highlight two of its most glaring realities. The first: algorithmic bias is a pervasive problem across all industries and affects us every day. The second: having a better understanding of the data on which an algorithm is trained goes a long way toward mitigating those problems.
Racial Facial Recognition: The National Institute of Standards and Technology tested 189 facial recognition algorithms for a 2019 federal report. This represented most commercial developers of facial recognition technology at the time and included systems from Microsoft and biometric companies like Cognitec. The Institute found that these systems falsely identified African-American and Asian faces ten to one hundred times more often than Caucasian faces. In another study, M.I.T.’s Media Lab found that Amazon’s Rekognition—yes, that one—had great difficulty identifying the gender of female faces and of darker-skinned faces in photos. One of the problems? You guessed it: these algorithms are trained on databases that don’t equally represent all races, ethnicities, genders, and ages.
Evidence-Based Sentencing: When a defendant is found guilty, the judge setting their sentence weighs the risk that they will reoffend. Increasingly, states hand this responsibility to automated software that claims to “assess the risk” of defendants and produces profiles and recidivism scores accordingly. These algorithms are lauded for eliminating human bias in sentencing; they appear, sometimes purposefully, race-neutral. In reality, they frequently factor data-backed proxies for race into their decision-making, including socioeconomic and educational background, family criminal history, and ZIP codes. An investigation by ProPublica found that COMPAS, a proprietary risk assessment tool, was nearly twice as likely to erroneously flag black defendants as “high risk for recidivism” as it was white defendants (45 percent versus 23 percent). In addition, black defendants were 45 percent more likely to receive higher risk scores than white defendants, even after controlling for variables such as prior crimes, age, and gender. (The short audit sketch after these examples shows how this kind of disparity is measured.)
Predictive Policing Systems: Police departments are also embracing automated criminal assessment tools. Some use risk-scoring algorithms to generate “most wanted” lists of individuals and distribute those lists to officers. These algorithms are not trained with race data, as doing so is illegal; like sentencing software, though, they are trained with data that functions as a proxy for race. What’s more, police use location-trained algorithms to predict where and when crimes are most likely to happen. Yet the predictions can create feedback loops: the algorithm sends more officers to a neighborhood, those officers record more crime there, and that new data reinforces the original prediction. The result is a sort of algorithmic confirmation bias, where the system finds what it expects to see rather than what is objectively there.
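The COMPAS disparity above is the kind of thing an audit can surface by comparing error rates across groups. Below is a hypothetical Python sketch of that core computation: the false positive rate per group, meaning how often people who did not reoffend were nonetheless flagged as high risk. The tiny dataset is invented for illustration; it is not the real COMPAS data.

from collections import defaultdict

# Each record: (group, labeled_high_risk, actually_reoffended) -- invented data.
records = [
    ("A", True,  False), ("A", False, False), ("A", True,  True),
    ("A", False, False), ("B", True,  False), ("B", True,  False),
    ("B", False, False), ("B", True,  True),
]

counts = defaultdict(lambda: {"false_positives": 0, "non_reoffenders": 0})
for group, high_risk, reoffended in records:
    if not reoffended:                       # only people who did NOT reoffend
        counts[group]["non_reoffenders"] += 1
        if high_risk:                        # ...but were still flagged high risk
            counts[group]["false_positives"] += 1

for group, c in sorted(counts.items()):
    rate = c["false_positives"] / c["non_reoffenders"]
    print(f"group {group}: false positive rate = {rate:.0%}")

If the rates differ sharply between groups, the tool is making its mistakes unevenly, which is exactly the pattern ProPublica reported.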

Scary, right?
Synthetic Data to Tackle Algorithmic Bias
Modern computer vision systems require extensive and diverse datasets for training. These large datasets are expensive and challenging to collect, label, and clean, making them prohibitive for most small teams. It is common practice instead to use one of the few openly available datasets, such as ImageNet. Because of how they have historically been collected, these datasets are not culturally or demographically representative. Publicly available image datasets, for example, exhibit Amero-centric and Euro-centric representation bias: they do not dependably represent people from the rest of the world, and they fail to capture how everyday objects and scenes differ across geographies.
Statistical learning methods fit the distribution of the data they are given, so training on these biased datasets all but guarantees a biased model. Remember: a model is only as good as the data on which it is trained.
On the other hand, synthetic datasets offer complete control over the data distribution and can therefore be designed to be representative. A synthetic human dataset, for instance, can be constructed to represent cultural, demographic, and gender diversity equally, and a variety of computer vision domains have explored using synthetic datasets to reduce bias. Because synthetic data is inexpensive and easy to generate, it can democratize access to large-scale datasets that capture genuinely representative geographic, cultural, and demographic diversity, in both the data and the people who annotate it, in a scalable and efficient way.
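As a sketch of what “designing the distribution” can look like in practice, the hypothetical Python snippet below enumerates demographic attributes and requests the same number of samples for every combination. The attribute categories are examples only, and render_human() is a made-up stand-in for whatever generation pipeline (3D rendering, a generative model, and so on) actually produces the images.

import itertools

SKIN_TONES = ["I", "II", "III", "IV", "V", "VI"]   # e.g., the Fitzpatrick scale
AGE_GROUPS = ["18-29", "30-44", "45-59", "60+"]
GENDERS    = ["female", "male", "nonbinary"]

def render_human(skin_tone, age_group, gender, index):
    # Hypothetical stand-in for a real synthetic-data pipeline; here it just
    # returns the attribute labels a renderer would be asked to produce.
    return {"skin_tone": skin_tone, "age_group": age_group,
            "gender": gender, "index": index}

def build_balanced_dataset(samples_per_cell=100):
    # Because we control generation, every demographic combination receives
    # exactly the same number of samples: no group is under- or overrepresented.
    dataset = []
    for skin_tone, age_group, gender in itertools.product(SKIN_TONES, AGE_GROUPS, GENDERS):
        for i in range(samples_per_cell):
            dataset.append(render_human(skin_tone, age_group, gender, i))
    return dataset

dataset = build_balanced_dataset()
print(len(dataset))  # 6 skin tones * 4 age groups * 3 genders * 100 = 7,200 samples

With collected data, that kind of balance has to be chased after the fact; with synthetic data, it is simply specified up front.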
Conclusion
Beyond constant vigilance, the best way to reduce bias in modern computer vision systems is to improve the datasets on which these systems are trained. As more cases of dangerously biased algorithms come to light, big tech companies are scrambling to use more representative training data and to audit their systems regularly for evidence of bias. Synthetic data means we can look forward to a future where algorithmic bias can be mitigated or, at the very least, understood.
If we can be a resource in your effort to educate yourself on or otherwise combat algorithmic bias, please reach out.