Over the past few years, artificial intelligence has become a bona fide buzzword amongst businesses of all sizes, with 97% of respondents to a Forbes survey seeing a potential benefit in some way, shape, or form. However, as AI is integrated into more and more of our modern lives, it is important to remember that it is still a human invention and, as such, is vulnerable to our own implicit biases.
Let’s explore what AI bias is in a little greater detail, and examine some of the ways it presents itself.
Any Form of Intelligence Can Learn to be Biased
AI is, at its simplest, nothing more than an equation that relies on data.
Granted, this equation is remarkably complex, and the data stores referenced are massive, but artificial intelligence really can be described that simply. This means that the accuracy of this data is crucial to the efficacy of the AI that relies on it.
Unfortunately, it is very easy for this data to be tainted or otherwise skewed by the bias of the people collecting it. Once there’s an issue with the data, the AI model will amplify the incorrect or biased information it contains… and why wouldn’t it? It’s only doing what it is told with the resources it’s been given.
If this problem isn’t caught and corrected, the model ends up exacerbating the very inequities baked into its source material. This is what is known as training data bias.
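To make this concrete, here is a minimal, hypothetical sketch in Python of how training data bias plays out: the toy “model” below does nothing but memorize skewed historical hiring records, and its recommendations faithfully reproduce that skew. The groups and numbers are invented purely for illustration.

```python
# A hypothetical sketch of training data bias: a toy "model" that simply
# memorizes hire rates from historical records. Groups and figures invented.
from collections import defaultdict

# Historical hiring records as (group, hired) pairs. The skew is deliberate:
# group "A" was hired far more often, reflecting past human bias, not merit.
records = [("A", True)] * 80 + [("A", False)] * 20 + \
          [("B", True)] * 30 + [("B", False)] * 70

def train(records):
    """'Train' by memorizing the historical hire rate per group."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired  # True counts as 1
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group):
    """Recommend a hire whenever the group's historical rate exceeds 50%."""
    return model[group] > 0.5

model = train(records)

# The model reproduces the bias it was fed: otherwise-identical candidates
# from groups A and B get different recommendations.
print(predict(model, "A"))  # True  (historical rate: 0.8)
print(predict(model, "B"))  # False (historical rate: 0.3)
```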
On a related note, the algorithms themselves can be written to weight different factors in a dataset so that the algorithm comes to certain conclusions. This is what is known as algorithmic bias.
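Algorithmic bias can be just as subtle. In the hypothetical sketch below, the data itself is clean; it is the hand-picked weights that decide who comes out on top. The feature names and weight values are invented for illustration only.

```python
# A hypothetical scoring function illustrating algorithmic bias: the weights
# chosen by the designer, not the data, drive the outcome.

# Weighting "years at one employer" heavily quietly penalizes anyone with a
# career gap (e.g., for caregiving), regardless of actual ability.
WEIGHTS = {
    "test_score": 0.2,
    "years_at_one_employer": 0.8,  # this choice is where the bias lives
}

def score(candidate: dict) -> float:
    """Weighted sum of features; the weights encode the designer's judgment."""
    return sum(WEIGHTS[f] * candidate[f] for f in WEIGHTS)

# Two candidates with comparable ability but different career paths:
steady = {"test_score": 0.7, "years_at_one_employer": 1.0}
gap    = {"test_score": 0.9, "years_at_one_employer": 0.2}

print(score(steady))  # 0.94 -- ranked higher
print(score(gap))     # 0.34 -- ranked lower despite a better test score
```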
Of course, there’s also the possibility that the person selecting the data to feed into the algorithm has their own preconceptions and biases, skewing which examples the model ever sees in the first place; this is often referred to as selection bias.
These issues include an assortment of unpleasant, and unpleasantly familiar, -isms.
Where Does AI Bias Show Up?
Ableism
Did you know that the World Health Organization estimates that 16% of the world’s population experiences some form of disability in their everyday life? When you consider the stigma, the barriers in healthcare and educational systems, the employment challenges, and the comorbid physical and mental health conditions that often accompany disabilities of all kinds, the misrepresentation this varied population faces from AI just seems mean-spirited.
From image generators perpetuating negative stereotypes to software that can’t parse speech impediments, there are clear issues that often present themselves… including AI-powered platforms presenting their own results in ways that are less than accessible.
Ageism
Old or young, AI can show bias against different age groups simply by making incorrect assumptions. Young people may be assumed to be older based on their health history, steering them toward inappropriate or irrelevant services. Older populations often have voice patterns that voice recognition software doesn’t correctly register, and social attitudes against the aged can easily come through in the actual programming.
Racism
This is probably the one you’ve heard most about, as the biases that AI programs are taught have created no small amount of consternation as people are misidentified by AI tools, leading to illegal surveillance and false arrests. However, the impacts reach beyond just law enforcement. Job recommendation platforms often favor certain racial groups, creating inequities in employment opportunities.
The issue becomes even more alarming in light of research showing that some AI platforms, used to help field simulated distress calls, were more likely to send law enforcement along when the purported caller identified as African American or as a Muslim. Similar race-related issues have been seen in medical AI applications, which are too commonly trained on less-than-diverse datasets.
Sexism
Finally, it is important that we recognize that the data feeding our AI platforms largely reinforces existing gender norms. Healthcare data, again drawn largely from the records of white, male-assigned bodies, leads many healthcare applications to assume they are treating an assigned-male white body, with the typical symptoms that would be found in one. The same goes for many safety features, such as those found in automobiles.
This issue hits home quite literally. Think about how quick you may be to yell at some of the smart assistants you have around. What gender are these smart assistants given by default? This has led to some push for more gender-neutral AI and other countermeasures to these tendencies.
At the End of the Day, Avoiding Bias Will Still Take Vigilance
While there may not be much that a small business or an individual interacting with AI as an end user can do, it is important that the businesses developing these AI models are held to standards that help minimize the chance of bias being baked into their algorithms.
For instance, they will need to be very careful that the data being used actually fits the context, being both accurate and relevant to the AI’s end goals. The development process will need a general overhaul as well: the aforementioned algorithms must be more intensely scrutinized, data collection must cover a diverse community, and more diverse groups must be involved in developing these AI platforms.
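As one illustration of what that scrutiny might look like in practice, here is a hypothetical sketch of a simple dataset audit that compares each group’s share of the training data against its share of the broader population. The group names, tolerance, and figures are all invented for the example.

```python
# A hypothetical dataset audit: flag groups whose share of the training data
# strays too far from a reference population. All names/figures invented.
from collections import Counter

def representation_gaps(samples, reference, tolerance=0.05):
    """Return groups whose dataset share differs from the reference share
    by more than the given tolerance."""
    counts = Counter(samples)
    total = len(samples)
    gaps = {}
    for group, expected in reference.items():
        actual = counts.get(group, 0) / total
        if abs(actual - expected) > tolerance:
            gaps[group] = (actual, expected)
    return gaps

# Invented example: a dataset that badly over-represents one group.
training_groups = ["A"] * 900 + ["B"] * 100
population = {"A": 0.6, "B": 0.4}

for group, (actual, expected) in representation_gaps(training_groups, population).items():
    print(f"Group {group}: {actual:.0%} of dataset vs. {expected:.0%} of population")
```

A check like this won’t catch every form of bias, but it makes one common failure mode, an unrepresentative dataset, visible before a model is ever trained on it.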
In the meantime, it’s never a bad time to take a look at your own data collection and security practices. Is your information organized in a way that ensures it is useful, while also being protected from breach and other serious issues? We can help you find out… and assist you in resolving any issues we uncover. Give us a call at (270) 282-4926 to learn more.