Jennifer Za Nzambi explores how over-reliance on algorithms may cause significant bias-induced problems.

Imagine posting ‘Good morning’ alongside a picture of yourself at work on Facebook, and getting arrested for it only a couple of hours later. What sounds like a silly anecdote unfortunately happened to a construction worker in Palestine in October this year. His caption, written in Arabic, was automatically translated by Facebook’s algorithms as ‘attack them’, which led to his arrest because the post was perceived as incitement. Whilst this particular situation is undoubtedly rare, it is probable that internet algorithms’ mistakes have, to varying degrees, affected the online lives of us all.

The Workings of an Algorithm
Algorithms are sets of rules that computer programs follow to achieve specific goals. In machine learning, they no longer depend on people to spell out exactly what to look for: they learn by recognising patterns in data. Consider, for instance, an algorithm used to distinguish mammals from birds. It trains on a collection of animal pictures and develops a pattern-recognition mechanism that allows it to tell the two types apart. What happens, however, if we confront the algorithm with an animal outside of its training set, say a bat? Since at first glance a bat looks more like a bird than like most mammals, chances are the algorithm will make a mistake. This is an example of inductive bias, which is also prevalent in many online advertisement algorithms.
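To make the bat example concrete, here is a minimal sketch in Python. The animals, features and classifier are invented for illustration only; the point is simply that a model trained on ‘typical’ mammals and birds, using features that happen to favour birds, confidently mislabels the unfamiliar case.

```python
# A toy sketch of inductive bias: a classifier trained only on typical
# mammals and birds, using superficial features, misclassifies a bat.
from sklearn.neighbors import KNeighborsClassifier

# Invented features: [has_wings, can_fly, lays_eggs]
training_animals = [
    [0, 0, 0],  # dog      (mammal)
    [0, 0, 0],  # cat      (mammal)
    [0, 0, 0],  # horse    (mammal)
    [1, 1, 1],  # sparrow  (bird)
    [1, 1, 1],  # eagle    (bird)
    [1, 0, 1],  # penguin  (bird)
]
labels = ["mammal", "mammal", "mammal", "bird", "bird", "bird"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(training_animals, labels)

# A bat has wings and flies but lays no eggs. The features the designer
# chose never capture fur or live birth, so the learned pattern misfires.
bat = [[1, 1, 0]]
print(model.predict(bat))  # -> ['bird']
```

The mistake here is baked in before any prediction is made: the choice of training examples and features already encodes assumptions about what a mammal ‘looks like’, which is exactly the kind of bias the rest of this article is concerned with.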

Imperiling Justice
In 2014, an 18-year-old African-American woman, Brisha Borden, stole an unlocked bicycle. A year earlier, Vernon Prater, a middle-aged Caucasian male, had stolen tools worth approximately the same amount. COMPAS is an algorithm used in the US justice system to predict recidivism probabilities. When presented with the cases of Brisha and Vernon, along with Brisha’s previous three counts of juvenile misdemeanour and Vernon’s record of three armed robberies, it concluded that Vernon posed a low risk, with a score of 3/10, and that Brisha was a high-risk individual, with a score of 8/10. Whilst Brisha has not committed any recorded offence since, Vernon went on to steal thousands of dollars’ worth of goods in the following years. ProPublica, an American news organisation, assessed the mechanism used by COMPAS and identified significant racial bias in its decision-making.

Evidence shows that despite committing about as many offences as white people, black people are more likely to be prosecuted and arrested, which suggests that algorithmic bias stems from that of humans. Does this mean that, because of their prejudice, algorithms should not be used in matters as important as court dealings? It’s difficult to say. Since evidence suggests that having had lunch makes judges more lenient in their rulings, some form of oversight over human bias in decision-making still seems to be called for.

What Can We Do?
Although algorithms are very efficient tools, currently used in areas as diverse as social media and legislation, they also pose multiple risks, of which the dissemination of partisan news and biased sentencing are just two examples. The solution to these problems could be to hold algorithms to the same standards as any other public decision mechanism. German Chancellor Angela Merkel suggested that socially impactful algorithms prone to distorting people’s perception should be made transparent by the firms that use them, so that people can have some degree of oversight over the algorithms that, in turn, oversee their decisions. This idea of checks and balances is prevalent in experts’ discourse.

Luckily, one does not have to be a software engineer to correct for some of the algorithms’ heuristics. Online algorithms treat current internet content as their ‘training set’. Everyone online can, however, change this set through posts on social media, publication of documents, or, more generally, through their online information consumption patterns. Facebook’s and YouTube’s algorithms learn about our tastes through our likes and choice of videos respectively. They may assume, for example, that if we watch a Donald Trump speech, we may also like to see a radical white supremacist video, thereby possibly creating a radicalising effect. Algorithms derive their outputs from our online behaviour, and it seems that only if ‘biased’ people actively try to correct for bias caused by misinterpretations can we move towards well-informed decision-making that people will want to get behind.
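The feedback loop described above can be sketched in a few lines of Python. This is a hypothetical toy, not YouTube’s or Facebook’s actual system, and the video names and co-watch counts are invented purely for illustration: a recommender that always follows the majority pattern in past behaviour nudges each viewer further along one path.

```python
# Toy "people who watched X also watched Y" recommender (hypothetical data).
from collections import Counter

co_watch = {
    "trump_speech": Counter({"election_news": 40, "radical_channel": 55}),
    "radical_channel": Counter({"more_radical_channel": 70, "election_news": 10}),
}

def recommend(video, k=1):
    """Suggest the k videos most often co-watched with `video`."""
    return [v for v, _ in co_watch.get(video, Counter()).most_common(k)]

history = ["trump_speech"]
for _ in range(3):
    suggestions = recommend(history[-1])
    if not suggestions:
        break
    history.append(suggestions[0])

print(history)
# ['trump_speech', 'radical_channel', 'more_radical_channel']
# Each step amplifies whatever past viewers did most often -- the drift
# the paragraph above describes. Different viewing choices by users would
# change the co-watch counts, and with them the recommendations.
```

The last comment is the practical point of this section: because the model’s ‘knowledge’ is nothing more than aggregated behaviour, changing our own behaviour is one of the few levers ordinary users have over it.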

