Is fair AI possible? - Coded Bias

Charlotte Moon
-
4.19.21

AI ethics has been widely debated for some time. With the world's biggest companies, including Google and Apple, all integrating AI tech, it's no wonder questions are being raised about how all this data is used and whether it's fair. 


In light of all these questions, the newly streamed Netflix documentary Coded Bias aims to shed light on the racial, gender and capitalist inequalities that have been ingrained within AI technology. Directed by Shalini Kantayya, the film centres on computer scientist Joy Buolamwini, founder of the Algorithmic Justice League, after she discovered for herself that facial recognition failed to recognise darker skin tones. The documentary follows Joy's journey to push for the first-ever legislation in the U.S. to govern against bias in the algorithms that impact us all.
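
To make the kind of disparity Joy uncovered more concrete, here's a minimal sketch of what a per-group accuracy audit can look like. Everything in it is illustrative: the group labels, data and numbers are hypothetical, not drawn from the film or from Buolamwini's actual research.

```python
# Minimal per-group accuracy audit (hypothetical data for illustration,
# not Buolamwini's methodology or results).
from collections import defaultdict

# Each record: (demographic_group, was_the_model_correct)
results = [
    ("lighter-skinned", True), ("lighter-skinned", True),
    ("lighter-skinned", True), ("lighter-skinned", False),
    ("darker-skinned", True), ("darker-skinned", False),
    ("darker-skinned", False), ("darker-skinned", False),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, is_correct in results:
    total[group] += 1
    correct[group] += is_correct

for group in total:
    accuracy = correct[group] / total[group]
    print(f"{group}: {accuracy:.0%} accuracy over {total[group]} samples")

# A large accuracy gap between groups is exactly the kind of bias
# an audit like this is designed to surface.
```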


However, this isn't just a US issue; the film also explores the bias ingrained in the UK facial recognition technology used by the Metropolitan Police. In a research project undertaken by Big Brother Watch, around 98% of the people who were matched and identified as wanted by police were in fact innocent.  
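
That figure is less paradoxical than it sounds once you factor in base rates: when genuinely wanted people make up a tiny fraction of a scanned crowd, even a system with a low false-positive rate will flag mostly innocent people. Here's a back-of-the-envelope sketch, where every number is a made-up assumption rather than a real Met Police statistic:

```python
# Back-of-the-envelope illustration of the base-rate effect behind
# figures like Big Brother Watch's 98%. All numbers are hypothetical.
crowd_size = 100_000          # people scanned
wanted_fraction = 1 / 10_000  # genuinely wanted people in the crowd
true_positive_rate = 0.90     # chance a wanted person is flagged
false_positive_rate = 0.01    # chance an innocent person is flagged

wanted = crowd_size * wanted_fraction
innocent = crowd_size - wanted

true_matches = wanted * true_positive_rate      # wanted people flagged
false_matches = innocent * false_positive_rate  # innocent people flagged
total_matches = true_matches + false_matches

print(f"Matches flagged: {total_matches:.0f}")
print(f"Innocent share: {false_matches / total_matches:.1%}")
# With these assumptions, roughly 99% of matches are innocent people,
# despite a false-positive rate of just 1%.
```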


One of my main concerns was the amount of risk the big tech companies are willing to gamble with. These companies don't know how this technology will affect us as a society, yet they seem to have the attitude of 'let's try it and see if anything bad happens', without us even knowing! 


But why do it? Surely for such a high risk, the rewards must be substantial… Well, yes, but unfortunately not for society as a whole. Much of our data is exploited by these big corporations for their own financial gain and, worst of all, we don't have a true picture of the detrimental effects it will have on our futures. 

It really seems that these large tech organisations are treating our data like casino chips and going all-in on black, for their own big returns. 


It’s Difficult to Know the Answer If You Don’t Understand the Question


But where does that leave us? If AI technology is a black box that is constantly learning from the environment it's in, how can companies keep it from being biased when the fundamental structure of today's society is flawed? 


A big problem is the lack of transparency and openness with the public about what organisations are using our data for. Whether it's a like, a share or a video view, all this data is being collected and used to make decisions that affect all of our futures. Coded Bias really amplifies the need for the public to be aware of what is going on, and for legislation and procedures that keep society as safe and as fair as possible.


What are your thoughts? Let me know on LinkedIn.