
How New Technological Coding Can Carry Toxic Bias

In 2015, Google came under fire when a software engineer discovered that Google’s photo algorithm labeled pictures of him and his black friends as “gorillas.” Despite Google’s promise to fix the issue, new reporting shows that it may not have been fixed after all.

A recent WIRED article shows that rather than fixing its algorithm, Google simply removed the word “gorilla” from the labels Google Photos will apply. Tests show that searches for “poodle,” “panda” and “cat” return images, while searches for “gorilla,” “ape” and “chimp” do not.
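To see the difference between hiding a label and fixing the underlying model, here is a rough sketch in Python. Google has not published its code, so every name below is invented; the example only shows the shape of the workaround.

```python
# Rough illustration only: these names are invented, not Google's actual code.
# The point is the shape of the workaround: the model's predictions are
# unchanged, and the offending labels are simply never shown.

BLOCKED_LABELS = {"gorilla", "ape", "chimp"}

def visible_labels(predicted_labels):
    """Drop blocked labels from a classifier's output before showing it to users."""
    return [label for label in predicted_labels if label.lower() not in BLOCKED_LABELS]

# The raw prediction still contains the label; it just never reaches the user.
print(visible_labels(["animal", "mammal", "gorilla"]))  # ['animal', 'mammal']
```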

Google is not alone in having issues with photo-recognition technology and sorting algorithms. In 2016, Microsoft released a bot on Twitter that was supposed to engage in conversation with other users. Within 24 hours, however, the bot, nicknamed “Tay,” was spewing anti-Semitic, misogynistic commentary. While some of its tweets merely repeated messages that other users had sent it, others were unprompted, such as “Ricky Gervais learned totalitarianism from Adolf Hitler, the inventor of atheism,” and transphobic comments such as “Caitlyn Jenner isn’t a real woman yet she won Woman of the Year?” Microsoft went back and deleted many of the tweets, but both Tay’s venture into the world of Twitter and Google’s attempts at categorizing pictures reflect broader, industry-wide problems with artificial intelligence and software development.

University of Virginia computer science professor Vicente Ordóñez noticed the same problem when he was developing image-sorting software. The software displayed a gender bias in labeling pictures, linking images of baking and cooking to women and sports to men, and going so far as to label a picture of a man in the kitchen as female. Ordóñez eventually went through the picture database used to “train” the software and found overwhelming gender biases within the images it contained: far more pictures showed women in kitchens and men playing sports. The problem, in other words, lies not with the machines, but with the data they learn from and the people who make and program the machines in the first place.
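A simplified version of that kind of dataset audit might look like the Python sketch below; the annotation format and numbers are invented for illustration, not taken from the dataset Ordóñez examined.

```python
# Toy audit in the spirit of the one described above: count how often each
# activity co-occurs with each gender in the training annotations. The
# annotation format and counts are invented, not the real dataset.
from collections import Counter, defaultdict

annotations = [
    {"activity": "cooking", "gender": "woman"},
    {"activity": "cooking", "gender": "woman"},
    {"activity": "cooking", "gender": "man"},
    {"activity": "sports", "gender": "man"},
    {"activity": "sports", "gender": "man"},
    {"activity": "sports", "gender": "woman"},
]

counts = defaultdict(Counter)
for ann in annotations:
    counts[ann["activity"]][ann["gender"]] += 1

# A heavy skew in these proportions is exactly the kind of bias a model
# trained on the images would learn and then amplify.
for activity, by_gender in counts.items():
    total = sum(by_gender.values())
    print(activity, {g: round(n / total, 2) for g, n in by_gender.items()})
# cooking {'woman': 0.67, 'man': 0.33}
# sports {'man': 0.67, 'woman': 0.33}
```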

But beyond a Twitter bot or a Google search, such findings have broader real-world implications for fields that already rely on artificial intelligence. The criminal justice system and the prison industry use such technology to predict recidivism, scoring prisoners on how likely they are to re-offend once released. A study two years ago found that the scores were twice as likely to be inaccurate for African American prisoners as for white prisoners, perpetuating the overwhelming racial biases already present in the prison system. Those scores are then used in courtrooms for sentencing and parole hearings, where an inaccurate number can help destroy someone’s life.
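That kind of disparity is surfaced by a fairly simple audit: compare how often the score falsely flags people in each group. The Python sketch below uses made-up records, not real defendant data or the study’s actual methodology, purely to show the calculation.

```python
# Hedged sketch of a group-by-group error audit; the records are invented.

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("black", True, False),
    ("black", True, True),
    ("black", True, False),
    ("black", False, False),
    ("white", True, False),
    ("white", False, True),
    ("white", False, False),
    ("white", False, False),
]

def false_positive_rate(group):
    """Share of people in `group` who did not re-offend but were flagged anyway."""
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in did_not_reoffend if r[1]]
    return len(flagged) / len(did_not_reoffend)

for group in ("black", "white"):
    print(group, round(false_positive_rate(group), 2))
# black 0.67
# white 0.33
# In this toy data one group is falsely flagged at twice the rate of the
# other, the kind of gap the study reported.
```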

So what is being done? New York City recently mandated a review of the algorithmic software its agencies use, covering everything from school placement to police dispatch, as well as the effects that software has. And the city of Pittsburgh recently implemented software to predict patterns of child neglect and abuse, opting not to take race into consideration in the algorithm at all. Such solutions are neither perfect nor comprehensive, and criminal justice and housing are both areas overwhelmingly affected by racial inequality on a national scale. But they are a first step toward greater transparency about how artificial intelligence functions, toward integrating technological solutions into society, and toward keeping discrimination from spreading beyond humans to machines.
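In practice, leaving race out of a model can be as simple as dropping the field before training, as in the hypothetical Python sketch below; the field names are invented, not Pittsburgh’s actual data schema.

```python
# Minimal sketch of the design choice described above: strip a protected
# attribute before the model ever sees a record. Field names are assumptions.

PROTECTED_FIELDS = {"race"}

def strip_protected(record):
    """Return a copy of a case record with protected fields removed."""
    return {key: value for key, value in record.items() if key not in PROTECTED_FIELDS}

case = {"prior_reports": 2, "household_size": 4, "race": "black"}
print(strip_protected(case))  # {'prior_reports': 2, 'household_size': 4}

# As the paragraph above notes, this is not a complete fix: correlated
# features such as neighborhood can still act as proxies for the removed field.
```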
