
Why Is My AI Racist?

Computers are supposed to be objective, operating in the black-and-white world of 1s and 0s. They don’t just go all willy-nilly whenever they want, so what happens when these seemingly impartial machines start exhibiting biased behaviors, even racism? Why do computers (and, more specifically, Artificial Intelligence, or AI systems) appear to go rogue? The answer comes back to human error.

Humans create these systems, and human bias is evident in their design and execution. This bias is immediately apparent when you look at who AI discriminates against: it is often marginalized communities. For instance, facial recognition technology notoriously struggles to identify Black faces accurately. These biases in AI systems reflect the prejudices and systemic inequalities present in our society. So, what happens when we deploy systems we know to be biased? To answer that, we must understand how AI is created and how human biases seep into these systems.

How is AI even created?

First, there isn’t a one-size-fits-all answer. There are many methods for creating systems that could be called artificial intelligence. However, one thing remains pretty constant during the training process: the AI is trained on data that contains countless examples of its task.

How does this work in practice? We’ll use the TSA as an example. You know the drill: you place your coat, shoes, and bag on the belt and then walk through the metal detector. Whether your laptop or tablet goes on the belt depends on how the stars align that day. Anyway, as you step through the metal detector, your items go through the baggage screening process. Traditionally, this is done by the watchful eyes of a TSA agent. Recently, an extra layer of protection has been added on top of those human eyes. And that is AI.

The AI is trained to detect prohibited items by being shown countless examples of X-rays with and without those banned items. After many examples, the AI picks up on patterns. Remember, it doesn’t know what a knife is; it just knows that a certain combination of curves and lines was typically a positive hit in its training. And so, after its training phase, it becomes pretty good at recognizing prohibited items.
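
If you’re curious what that training step looks like in code, here is a minimal sketch in Python using scikit-learn and made-up “scan” data. Everything in it (the toy pixel values, the labels, the choice of model) is invented for illustration; a real screening system is far more elaborate.

```python
# A toy version of the training process described above: show a model many
# labeled "scans" and let it find the pattern that separates the two classes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each scan is a small grid of pixel intensities (a stand-in for an X-ray).
n_scans, n_pixels = 2000, 64
scans = rng.normal(size=(n_scans, n_pixels))

# Label half the scans as containing a prohibited item and brighten a few pixels
# in those scans; that brightness pattern is the "curves and lines" the model keys on.
labels = rng.integers(0, 2, size=n_scans)   # 1 = prohibited item present
scans[labels == 1, :8] += 1.5

train_x, test_x, train_y, test_y = train_test_split(scans, labels, random_state=0)

# The model never learns what a "knife" is; it only learns which pixel patterns
# tended to come with a positive label during training.
model = LogisticRegression(max_iter=1000).fit(train_x, train_y)
print("accuracy on unseen scans:", model.score(test_x, test_y))
```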

You may be thinking that this sounds good. And yes, AI can absolutely be good. However, it is “naive” in the sense that it will find patterns even where there is nothing meaningful to learn, and worse, it will just as readily learn objectively harmful things.

The result? Algorithmic bias, which occurs when an algorithm systematically makes decisions that consistently favor or disfavor particular groups of people.
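
One way to make that definition concrete is to measure it. The sketch below uses entirely made-up decisions and group labels; it simply compares the rate of favorable outcomes across groups and looks for a consistent gap, which is one basic check auditors run on a system’s outputs.

```python
# A toy audit of a system's decisions: if the favorable-outcome rate differs
# sharply and consistently by group, that is algorithmic bias. All values here
# are invented for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
print(rates)                              # group A: 0.75, group B: 0.25
print("gap:", rates["A"] - rates["B"])    # a large, persistent gap favors group A
```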

Okay…so why is my AI racist?

There are many reasons, so let’s discuss the top offenders.

1. Training on Biased Data

An algorithm will learn harmful and incorrect opinions when trained on bad or biased data. For example, say you hypothetically trained a Large Language Model (such as ChatGPT) on data from uncensored forums that spew racism. Consequently, the model would echo those harmful sentiments when asked about marginalized groups. A disturbing instance of the consequences of training on biased data occurred in 2017, when a no-touch soap dispenser was found to be far less functional for darker skin tones than for lighter ones. This bias resulted from the training data not adequately representing diverse skin tones, leading to a product that failed to serve all users equitably.
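
To make the forum example above concrete, here is a hypothetical sketch. The tiny corpus, the group names, and the labels are all invented; the point is simply that whatever skew sits in the training data is exactly what the model reproduces.

```python
# A miniature text classifier trained on deliberately skewed example posts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Imagine scraping a forum where posts about one group are disproportionately hateful.
posts = [
    "group_a people are great", "group_a people are friendly",
    "group_b people are great", "group_b people are awful",
    "group_b people are awful", "group_b people are hostile",
]
labels = ["positive", "positive", "positive", "negative", "negative", "negative"]

model = make_pipeline(CountVectorizer(), MultinomialNB()).fit(posts, labels)

# The model now echoes the skew it was trained on: the same neutral sentence is
# scored differently depending only on which group it mentions.
print(model.predict(["group_a people are nice", "group_b people are nice"]))
# -> positive for group_a, negative for group_b, not because of anything true
#    about group_b, but because of the data the model learned from.
```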

2. Insufficient Training on Specific Subgroups

If an algorithm isn’t sufficiently trained on data from a specific subgroup, its accuracy in identifying or responding to that subgroup will be lower than for others. A significant example of this issue involves a group of tech giants. In 2019, the National Institute of Standards and Technology (NIST) published a damning study highlighting a major source of bias. The study found that these companies had developed facial recognition software that was overwhelmingly accurate at identifying middle-aged white male faces but significantly less accurate when identifying any other demographic. This discrepancy stems from a skewed training sample in which middle-aged white men were vastly over-represented.
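
Here is a simplified, hedged simulation of that skewed-sample effect. The two “subgroups,” their features, and the 95/5 split are invented; the takeaway is that a model trained mostly on one group does well on that group and drops to roughly coin-flip accuracy on the group it rarely saw.

```python
# Simulate a training set dominated by one subgroup and measure per-group accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def make_group(n, informative_feature):
    """Toy data for one subgroup: the correct answer depends on a different
    feature for each group (a stand-in for faces that look different)."""
    x = rng.normal(size=(n, 5))
    y = (x[:, informative_feature] > 0).astype(int)
    return x, y

# Training set: 95% group A and 5% group B, i.e. the skewed sample described above.
xa, ya = make_group(1900, informative_feature=0)
xb, yb = make_group(100, informative_feature=1)
model = LogisticRegression(max_iter=1000).fit(np.vstack([xa, xb]),
                                              np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
xa_test, ya_test = make_group(1000, informative_feature=0)
xb_test, yb_test = make_group(1000, informative_feature=1)
print("accuracy, group A:", model.score(xa_test, ya_test))   # high
print("accuracy, group B:", model.score(xb_test, yb_test))   # close to chance
```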

3. Lack of Diversity in Tech, Especially in AI

The tech industry, and particularly the AI sector, lacks diversity. This is problematic because, without diverse voices in the room, there’s no one to question whether the AI being developed is fair and non-discriminatory towards marginalized groups. When certain perspectives are missing, it’s easy to overlook or misunderstand the needs and experiences of those groups. To illustrate this further, consider how algorithms are trained. They rely on vast amounts of data, but if that data doesn’t accurately represent all groups, the final algorithm will perform poorly for the groups it underrepresents. This can entrench systemic biases as these flawed algorithms are adopted across more and more sectors.

The Impact of AI Bias

The widespread use of AI has real-world consequences. For instance, AI is often used in facial recognition systems, which are disproportionately biased against certain groups. This bias can lead to wrongful arrests, as seen in the case of Randal Quran Reid, who was pulled over in Atlanta because AI had identified him as the suspect in a series of stolen credit card purchases. Although it is known that police departments use this software, it’s not clear how many do, as a Pew Research Center study notes. This is problematic for endless reasons, but the predominant one is that the technology keeps misidentifying people like Randal.

Understanding these issues is crucial for creating fair and equitable AI systems. Only by acknowledging and addressing the biases in our data and the lack of diversity in our development teams can we hope to build technology that serves everyone equally.