Mathematical Oppression

Why are mathematical models biased and why should you care?

Time to read: 5 min

Search for Bias

Before search engines like Google, Bing, or Yahoo existed, curating digital information was a remarkably human-intensive task. In the early days of the internet, people manually curated lists of web links, so accessing digital information felt similar to visiting your local library. Today, how we access information is decided by remarkably complex algorithms, which select and prioritize search results on their own. Unfortunately, modern search engines have a fundamental problem that is only expected to worsen as the technology behind them grows more complex: they are biased.

Math is Not Pure

At the heart of every search engine are algorithms and mathematical models. You might think that math is neutral and that these models are therefore unbiased, but you would be wrong.

According to Cathy O'Neil, an American mathematician, data scientist, and award-winning author, in her 2016 book Weapons of Math Destruction:

The math-powered applications powering the data economy were based on choices made by fallible human beings. Some of these choices were no doubt made with the best intentions. Nevertheless, many of these models encoded human prejudice, misunderstanding, and bias into the software systems that increasingly managed our lives. Like gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists. Their verdicts, even when wrong or harmful, were beyond dispute or appeal. And they tended to punish the poor and the oppressed in our society, while making the rich richer [1].

Unfortunately, most of us are not mathematicians or computer scientists, and whether a search engine is biased or expresses human prejudice is not readily apparent. However, once you learn some of the ways bias is perpetuated in search engines, it becomes easier to recognize as you go about your daily life.

Examples of Bias

Algorithms being biased might not seem like a big deal: so what if your search results do not give you what you want every so often? For communities and groups that already face heavy stereotyping, objectification, and discrimination, however, that bias is disastrous. Back in 2013, the United Nations, in collaboration with the advertising agency Memac Ogilvy & Mather Dubai, launched a campaign that used "genuine Google searches" to draw attention to sexist and discriminatory patterns on Google's platform.

UN Women ad

Credit: Memac Ogilvy & Mather Dubai

The campaign displayed a range of blatantly sexist ideas that Google's autosuggestion feature surfaced to users, including but not limited to:

  • Women cannot: drive, be bishops, be trusted, speak in church
  • Women should not: have rights, vote, work, box
  • Women should: stay at home, be slaves, be in the kitchen, not speak in church
  • Women need to: be put in their places, know their place, be controlled, be disciplined

At the time, the campaign suggested that search engines like Google mirror their users' beliefs, indicating that society still holds a wide array of sexist ideas about women. The campaign, however, placed the blame primarily on users rather than on the search engine itself. In the years since, Google has improved its autosuggestion feature and filtered out much of this bias, but it still exists.

Once you start searching on Google, it is shockingly easy to find areas of bias. For instance, as of this writing, if you search Google for 'beautiful', 40 of the top 50 image results are of women. If you search 'professor', 42 of the top 50 image results are of men. If you search 'weight loss', 39 of the top 50 image results are of women. Google's results might not seem biased at first, but the pattern becomes increasingly clear the longer you look. You might have expected that in nearly a decade Google would have made major strides toward removing bias, but that is not the case. So, the question is: will search engines like Google get better at removing bias in the next 20 years? What are the consequences if they don't, and how can you learn to recognize this bias?
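A hand count like the ones above can be expressed as a simple proportion check. The sketch below is purely illustrative (the labels are hypothetical hand-coded annotations, not anything Google exposes), but it shows how informal tallies of search results can be turned into comparable numbers:

```python
from collections import Counter

def label_proportions(labels):
    """Return the share of each label among a list of hand-coded results."""
    counts = Counter(labels)
    total = len(labels)
    return {label: count / total for label, count in counts.items()}

# Hypothetical annotations mirroring the 'professor' example above:
# 42 of the top 50 image results depict men, 8 depict women.
professor_results = ["man"] * 42 + ["woman"] * 8
print(label_proportions(professor_results))  # {'man': 0.84, 'woman': 0.16}
```

A 50/50 split would suggest a balanced result set; the further the proportions drift from that, the stronger the case that the query's results are skewed.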

Consequences of Not Removing Bias

Removing all bias will not always be possible, since it is nearly impossible to create a data set that represents the thoughts, opinions, and demographics of all people. If we do not take this into account and moderate the bias that search engines and AI multiply, future generations will become more and more influenced by the prejudice these technologies perpetuate.

Reducing bias in search algorithms would produce more neutral results and a more equal footing for everyone who uses them. Otherwise, ad targeting will only become more rampant as search engines pool biases and target the expected user. For a better and more relevant experience on the web, bias must be watched for and removed.

How to Recognize Algorithmic Bias

While search engines represent a large problem area for algorithmic bias, the growing use of artificial intelligence and machine learning tools presents another issue to watch as technology progresses. With the large volumes of macro- and micro-data these algorithms consume, they constantly influence decisions across a wide range of topics. Crucially, algorithmic bias often originates in the data itself: as an algorithm learns and adapts from the data it is given, it absorbs those biases and reproduces them at larger scale.
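To see how a learning system can absorb and reproduce skew in its training data, here is a minimal toy sketch, my own illustration rather than how any real search engine works. It builds a naive next-word suggester by counting which words follow which in a small hypothetical corpus; whatever imbalance the corpus contains becomes the model's top suggestion:

```python
from collections import Counter, defaultdict

def train_suggester(corpus):
    """Build a naive next-word model: for each word, count its followers."""
    followers = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            followers[prev][nxt] += 1
    return followers

def suggest(followers, word, k=2):
    """Return the k continuations the model has seen most often."""
    return [w for w, _ in followers[word].most_common(k)]

# Hypothetical toy corpus: the 2-to-1 imbalance in the text is exactly
# what the model learns and ranks first in its suggestions.
corpus = [
    "nurses are women",
    "nurses are women",
    "nurses are men",
]
model = train_suggester(corpus)
print(suggest(model, "are"))  # ['women', 'men']
```

The model has no opinion of its own; it simply ranks by frequency, so a biased corpus yields biased suggestions. Real autosuggestion systems are vastly more sophisticated, but the underlying dynamic, learned frequencies echoing the data, is the same.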

The first step in removing these biases is to recognize the causes of bias and where they might be present. Historical biases against minorities and women should be watched for and recognized in the data sets used to train algorithms. There is also a trade-off between fairness and accuracy in the ethics of algorithms that can be hard to recognize. Even so, the core of recognizing algorithmic bias is properly assessing whether the data an algorithm is given accurately represents its subjects. It is important to ask yourself, "Who is the audience, and who benefits from these searches?" AI and algorithms are not free from human judgments, so without moderation in search engines and other new technologies, bias will continue to be taught and spread.

Conclusion

Ultimately, while bias in search engines and algorithms is not always noticeable, it remains present and persistent to this day. Mathematical models are just as susceptible to biased data and require a watchful eye to recognize the unconscious or conscious bias that may be occurring. If bias is not removed from our search engines, prejudice and judgment will only continue to be perpetuated and targeted at users.

Takeaways

  • Bias in search engines and algorithms is not always noticeable; however, it remains present and persistent to this day.
  • Mathematical models are just as susceptible to biased data and require a watchful eye to recognize the unconscious or conscious bias that may be occurring.
  • If bias is not removed from our search engines, prejudice and judgment will only continue to be perpetuated and targeted at users.

References

  • [1] C. O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. USA: Crown Publishing Group, 2016.