FaceApp and Artificial Intelligence Are Learning From Racist and Sexist Data

Artificial Intelligence (AI) is the technology of the future. New apps and technologies that utilize AI are hitting the market at record rates, but many of them have major flaws. Simply put, they are learning from racist and sexist data.

AI technologies are far from perfect. Editing apps equate beauty with whiteness, as in the case of the incredibly popular FaceApp, whose “hotness” filter lightens Barack Obama’s skin and makes his nose more European (as seen in the cover photo). Programs associate women with “home” instead of “science,” and résumé-reading technologies favor European names over African-American ones.

Here’s how AI works: large amounts of data are fed into a program, and in a matter of seconds, the computer sifts through the data and responds with an output. It is the technology behind Google’s AutoDraw (which guesses what your makeshift doodle is and turns it into a professional masterpiece) and behind Thread (a fashion start-up that uses AI to sort through 200,000 pieces of clothing to find the perfect pieces for your style). It is also the technology behind programs that can effectively identify which tumor cells are likely to lead to skin cancer and behind ones that can beat world-class professionals in poker. This is truly some awesome stuff to keep an eye on.
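
To make that loop concrete, here is a minimal sketch in Python of the “data in, prediction out” idea, using a tiny made-up data set. It is only an illustration, not how AutoDraw, Thread, or any of the products mentioned above actually work.

```python
# A minimal sketch of the "data in, prediction out" loop described above.
# The data and labels are invented for illustration only.
from sklearn.neighbors import KNeighborsClassifier

# Toy training data: [height_cm, weight_kg] labeled as "cat" or "dog".
examples = [[25, 4], [30, 5], [60, 25], [70, 30]]
labels = ["cat", "cat", "dog", "dog"]

model = KNeighborsClassifier(n_neighbors=1)
model.fit(examples, labels)           # the program "learns" from the data
print(model.predict([[28, 4.5]]))     # output for a new, unseen example -> ['cat']
```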

AI programs utilize something called “deep learning,” where programs are given millions and millions of images and are trained to recognize patterns in them until they become incredibly accurate. In a very sci-fi way, the computers build neural networks that loosely mimic those of humans. In the computer’s “brain” there are millions of connections, arranged in layers that all the pixels of an image pass through. However, somewhere in the training of these programs, engineers made a mistake.
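
As a rough sketch of those layered connections, the toy network below pushes the pixel values of one fake image through a couple of layers. The layer sizes are arbitrary assumptions; real deep-learning models are vastly larger.

```python
# A minimal sketch of the layered "brain" described above: pixel values flow
# through stacked layers of connections. Sizes here are arbitrary assumptions.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),            # turn a 28x28 image into 784 pixel values
    nn.Linear(784, 128),     # first layer of learned connections
    nn.ReLU(),
    nn.Linear(128, 10),      # final layer: scores for 10 possible labels
)

fake_image = torch.rand(1, 28, 28)   # stand-in for one grayscale image
scores = model(fake_image)
print(scores.shape)                  # torch.Size([1, 10])
```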

In a recent study by Princeton University and Britain’s University of Bath, an AI program was tested using a Word-Embedding Association Test (WEAT), which measures how strongly words are associated with one another (like “rose” and “flower”). The WEAT is modeled on the Implicit Association Test (IAT), which examines what you personally associate with specific races, cultures, religions, sexualities, or genders. You can take a version of an IAT here. Something important to realize is that these tests come with a disclaimer that the results are not always valid: it is hard to reliably measure bias in five minutes.
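
For a sense of what “how strongly words are associated” means in practice, here is a minimal sketch using tiny hand-made word vectors and cosine similarity. Real word-embedding systems learn their vectors from billions of words of text; these numbers are invented for illustration.

```python
# A minimal sketch of word association: words are stored as vectors, and the
# cosine similarity between two vectors says how related the words are.
# The tiny hand-made vectors below are assumptions for illustration only.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

vectors = {
    "rose":   np.array([0.9, 0.8, 0.1]),
    "flower": np.array([0.8, 0.9, 0.2]),
    "engine": np.array([0.1, 0.2, 0.9]),
}

print(cosine(vectors["rose"], vectors["flower"]))  # high: strongly associated
print(cosine(vectors["rose"], vectors["engine"]))  # low: weakly associated
```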

However, using a Stanford-designed word-embedding algorithm called GloVe, the researchers tested an AI program millions of times across billions of word associations. With that many trials, they were able to conclude definitively that the program associated male names with “career” and “math,” and female names with “home” and the “arts.” Furthermore, it associated names typical of POC with negative connotations.
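
The sketch below shows, with deliberately biased toy vectors, the kind of comparison a WEAT-style test makes: how strongly two sets of names lean toward “career” versus “home.” The names, vectors, and numbers are invented for illustration; the actual study used GloVe vectors trained on a large web corpus.

```python
# A rough sketch of a WEAT-style comparison: do these names sit closer to
# "career" words or "home" words? Toy vectors are invented and intentionally
# biased to illustrate the effect the researchers measured.
import numpy as np

def cosine(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

vectors = {
    "john":   np.array([0.9, 0.1]), "mike":  np.array([0.8, 0.2]),
    "amy":    np.array([0.2, 0.9]), "sarah": np.array([0.1, 0.8]),
    "career": np.array([1.0, 0.0]), "home":  np.array([0.0, 1.0]),
}

def association(name):
    # positive -> closer to "career", negative -> closer to "home"
    return cosine(vectors[name], vectors["career"]) - cosine(vectors[name], vectors["home"])

male, female = ["john", "mike"], ["amy", "sarah"]
print(np.mean([association(n) for n in male]))    # positive: leans "career"
print(np.mean([association(n) for n in female]))  # negative: leans "home"
```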

But why is this happening? AI programs are learning our culture and all of the negative biases that come with it.

As the study’s co-author Joanna Bryson puts it, “AI is just an extension of our existing culture.”

We are unconsciously training our programs to be this way. Even something as simple as Googling “human hand” reveals human bias, when the results come back with images that are overwhelmingly of white hands. When engineers feed these programs biased data, the programs, which are extremely malleable, learn and relearn racism and sexism until those biases become ingrained in the code.

In the case of FaceApp, the data set of human faces the program learned from was built personally by the app’s engineers. And while they did not mean to code a racist program, they still bear responsibility for doing so, having taught it that beauty equals whiteness.

The upside in all of this is that as more of AI’s flaws come to light, engineers become more aware of the problems and will hopefully work to solve them. But what we truly need is more women and POC in STEM-related jobs. AI is far from perfect, and probably never will be, but the best thing we can do is stop these programs from reinforcing negative stereotypes.
