
How Teens Ought To Think About A.I.

Elon Musk is often in the news for saying things. Months ago, the billionaire behind the electric car maker Tesla sparked outrage and headlines when he tweeted “FREE AMERICA NOW,” along with other Trumpian statements. And in July, Musk was in the news again for another bold claim, this time about Artificial Intelligence (A.I.). Speaking to the New York Times, Musk predicted that A.I. could overtake human intelligence by 2025.

How seriously should we take him? After all, A.I. is the technology of the future (and, increasingly, the present), and we are the humans who’ll have to live with these technologically and philosophically challenging inventions. I’ll get to that, and to whether this so-called “Singularity” (the point at which A.I. overtakes human intelligence) is even the smart thing to be worrying about when it comes to thinking computers.

But first, let’s start with a pulse-take of A.I. today. 

Most modern A.I. systems, like the speech recognition software that powers personal assistants such as Alexa, are built around “neural networks.”

A neural network is a type of computer structure that is loosely inspired by the human brain’s own network of biological neurons. Neural networks have many layers, with the outputs of one layer feeding into the inputs of the next, and so on. Each layer is made up of a series of nodes, or “neurons.” The first layer of nodes takes in the raw data as numbers; in the case of image recognition, those inputs might correspond to the brightness of the pixels in different parts of an image. The connections between neurons each carry a number called a “weight,” and it is these weights that determine what calculation the network performs on the data.
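To make that a little more concrete, here’s a minimal sketch in Python of a “forward pass” through a tiny network: numbers go in one end, get multiplied by the weights layer by layer, and a score comes out the other. Everything here, the layer sizes, the random weights and the pretend 2x2 “image,” is invented purely for illustration; real image networks have millions of weights.

```python
# A minimal sketch of a tiny neural network's forward pass, using NumPy.
import numpy as np

def sigmoid(x):
    # Squashes any number into the range 0..1
    return 1 / (1 + np.exp(-x))

# Pretend input: brightness values for a tiny 2x2 "image", flattened into 4 numbers
pixels = np.array([0.1, 0.9, 0.8, 0.2])

# Weights connect one layer's outputs to the next layer's inputs.
# Here they start out random; training would adjust them.
rng = np.random.default_rng(0)
hidden_weights = rng.normal(size=(4, 3))   # 4 inputs -> 3 hidden neurons
output_weights = rng.normal(size=3)        # 3 hidden neurons -> 1 output score

hidden = sigmoid(pixels @ hidden_weights)  # each hidden neuron sums its weighted inputs
output = sigmoid(hidden @ output_weights)  # a single score, e.g. "how dog-like is this?"
print(output)                              # one number between 0 and 1
```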

A basic “neural network” / Credit: Kjell Magne Fauske

The goal of a neural network is to provide a framework for computers to “learn” things.

The network is trained by a process called “backpropagation.” A human labels a set of images, e.g. “contains a dog” and “does not contain a dog.” The network is then fed these images without being told which is which. Initially, the weights are set to random values. The network is trained over and over again, adjusting the weights until it reliably produces the correct output for each input. If a particular set of weights leads the network to conclude that an image contains a dog, and it does, those weights are kept. If the network says an image contains a dog and it doesn’t, the weights are adjusted, and the process repeats. A toy version of that adjust-and-repeat loop is sketched below.
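Here’s what that loop might look like in Python, using a single layer and four made-up “images” of four numbers each. Real backpropagation pushes the error back through many layers of a far bigger network, but the core idea, nudge the weights in the direction that shrinks the error and go again, is the same.

```python
# A toy version of "adjust the weights until the answers come out right".
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Toy "images": 4 numbers each, labelled 1 = "contains a dog", 0 = "no dog"
images = np.array([[0.9, 0.8, 0.1, 0.2],
                   [0.8, 0.9, 0.2, 0.1],
                   [0.1, 0.2, 0.9, 0.8],
                   [0.2, 0.1, 0.8, 0.9]])
labels = np.array([1.0, 1.0, 0.0, 0.0])

rng = np.random.default_rng(0)
weights = rng.normal(size=4)                   # start with random weights

for step in range(1000):
    guesses = sigmoid(images @ weights)        # what the network currently thinks
    errors = guesses - labels                  # how wrong it is on each image
    gradient = images.T @ errors / len(labels) # which way to nudge each weight
    weights -= 0.5 * gradient                  # nudge the weights, then repeat

print(np.round(sigmoid(images @ weights), 2))  # close to [1, 1, 0, 0] after training
```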

Neural networks have been around as a concept for a long time. But they’ve only recently become the dominant approach to A.I., thanks largely to the boom in “Big Data.” Companies like Google can now train their systems on enormous datasets. Armed with data like this, and with far more computational power, neural networks can demonstrate some impressive, human-like intelligence.

Here are some examples. 

Neural networks have enabled machines to surpass humans in many select areas, and they have shown real promise to do a lot of good. Some networks can pick up cancers years earlier than trained oncologists, for example; others can recognise patterns in human-trafficking payments and flag illegal activity.

Now, the term “neural network,” borrowed from the brain’s own circuits of neurons, would lead one to believe that A.I. built on them mimics the human thought process. That is not really the case. To recognise dogs, a neural network needs millions of sample images. A human kindergartner only needs to see a dog once.

An unconventional take on “dog walking” that could confuse a computer. / Credit: Thang Nguyen

There’s another flaw in a neural network’s professed “intelligence”: the “meaning barrier.” This is the gap between an A.I. system recognising a pattern in language or images and actually understanding what it is seeing. To be fair, these are loosely defined terms. But the inhumanness of an A.I. system’s “understanding” shows up repeatedly in image recognition. A neural network might analyse an image and produce some probabilities: this has a 70% chance of being a cat, a 50% chance of being a baseball, and a 20% chance of being a hula dancer. If the image was a cat and the network picked the top probability, then it got the exercise right. But surely we can’t say it understands what a cat is if it gave the image a 50% chance of actually being a baseball.
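In code, that “answer” is nothing more grand than picking the biggest number out of a list of scores. The labels and numbers below are made up to mirror the example above.

```python
# A toy sketch of how an image classifier "answers": it never says "that's a cat",
# it just hands back a score per label and we read off the biggest one.
scores = {"cat": 0.70, "baseball": 0.50, "hula dancer": 0.20}

best_guess = max(scores, key=scores.get)
print(best_guess)           # "cat" is counted as a correct answer...
print(scores["baseball"])   # ...even though the model also gave "baseball" a 50% score
```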

Google Translate’s neural networks are good: essentially, they’re fluent in a language without ever understanding a word of it. But they’re not as reliable as human translators, because their lack of understanding trips them up on certain words and phrases. Should the English word “bill” be translated into French as proposed legislation, a duck’s beak, a required payment or a piece of paper? A computer can’t choose without an in-depth grasp of context and meaning. Should a speech recognition module transcribe a word as “to” or “too”? Ditto.
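Here’s a deliberately crude illustration of the problem: a word-for-word “translator” with a tiny invented dictionary, which can only ever pick one French sense of “bill,” no matter what the sentence is actually about. Real systems like Google Translate are vastly more sophisticated, but the underlying ambiguity doesn’t go away.

```python
# A toy word-for-word "translator" with a made-up mini-dictionary.
word_to_french = {
    "the": "le",
    "bill": "bec",       # duck's beak: just one of the word's several meanings
    "passed": "a passé",
}

sentence = "the bill passed"   # clearly about legislation, not about a duck
translation = " ".join(word_to_french.get(word, word) for word in sentence.split())
print(translation)             # "le bec a passé", i.e. "the beak passed": nonsense
```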

Credit: Needapix

See, neural networks can be very clever. But at bottom they work off probabilities and numbers. In terms of artificial intelligence, the gap between understanding the meaning of the concept of a “dog” and merely computing the numerical probability that a certain arrangement of pixels is a “dog” is still a big one. That means slight tweaks can trick an A.I. in ways a human never would be. Researchers have found that changing certain pixel colours in an image, so subtly that a human couldn’t tell the difference, can significantly hinder an A.I.’s image recognition capacities.
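Here’s an invented, stripped-down sketch of that effect. The “model” is just a single layer of made-up weights standing in for a real trained network, but it shows how a tweak far too small for a human to notice can flip the machine’s answer.

```python
# A minimal, invented sketch of the "tiny tweaks fool the model" effect.
import numpy as np

weights = np.tile([1.0, -1.0], 50)    # 100 made-up weights: + means "dog evidence", - means "not"

image = np.full(100, 0.5)             # a flattened 10x10 "image": mid-grey everywhere...
image[::2] += 0.05                    # ...plus a faint dog-shaped pattern on the + pixels

def dog_score(img):
    return img @ weights              # positive score = "dog", negative = "not a dog"

print(dog_score(image))               # 2.5 -> confidently a dog

# Nudge every pixel by just 0.03 (about 3% of the brightness range) against the weights.
# No human would notice the change, but the score flips sign.
tweak = -0.03 * np.sign(weights)
print(dog_score(image + tweak))       # -0.5 -> suddenly "not a dog"
```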

So there’s a lot holding A.I. back from realising Elon Musk’s worry. What’s known as narrow (or “specific”) artificial intelligence is already here: A.I. that can recognise dogs pretty well, or diagnose cancers. General artificial intelligence is not.

What would a generally intelligent machine be able to do? Well, it would be able to transfer what its networks learn from one problem to another. Machines can beat the smartest humans at chess, but they can’t understand that they’ve won, and the same program can’t then go and play the game of Go without plenty of human effort to code in the rules and techniques. A generally intelligent machine would also be able to make use of common sense and intuition. A.I. currently can’t use common sense or reason about new and unexpected scenarios. This is what makes self-driving cars so hard: getting a car to recognise road signs is becoming easier; getting it to realise it needs to stop when a giant moose stomps on the car in front of it is harder.

There’s a saying in the A.I. community: “easy things are hard.” From the very beginning, A.I. could perform big calculations much faster than a human. But it comes skidding to an unintelligent halt at simple challenges like holding a conversation, the “intuitive physics” of how objects behave, or reading emotions based on more than probabilities. 

Here’s an example cited by researcher Melanie Mitchell in her book Artificial Intelligence: A Guide for Thinking Humans:

Credit: Obama White House

What would it take for a computer to understand this image? Perhaps it could caption it as “a group of people in a bathroom.” But would it know that some of the people are reflections in mirrors? Or would it recognise the oddity of seeing the president in a bathroom? Humans recognise that the man is standing on the scale, even though the white scale is not defined by sharp pixels against the background. We also see that Obama has his foot on the scale, and our intuitive knowledge of physics lets us reason that his foot will cause the scale to register the man’s weight incorrectly.

Our intuitive knowledge of psychology tells us that the man will probably not be happy about his weight registering incorrectly, all the more so because our sense of the space tells us he isn’t aware of Obama, given he is looking in the opposite direction. We realise that the people around are smiling, probably because the scenario is funny, and possibly funnier still given Obama’s serious status. Finally, we recognise that the prank is light-hearted, and the man will probably laugh too once he realises what is happening. As Andrej Karpathy, who originally explained the difficulty of reading this image, wrote: “That’s frighteningly meta.” Importantly, it shows how complex and nuanced human understanding really is.

The reasoning people like Musk use to justify their bold claims about A.I. is that technology improves exponentially. Perhaps A.I. researchers dismiss worries about superintelligence because we’re at the foot of the exponential curve, in the process of transitioning from decades of slow progress to an explosion in simulated thinking ability. Humans are dabbling in increasingly complex A.I. systems that they often don’t understand and can’t fully predict. So it’s a perhaps.

But the evidence today suggests that superintelligence, and probably even general intelligence, is a long way off, mainly because so many of these “simple” things are poorly understood in humans, let alone machines. As the A.I. researcher Andrew Ng, formerly Baidu’s chief scientist, put it: “I don’t work on not turning AI evil today for the same reason I don’t worry about the problem of overpopulation on the planet Mars.”

Yet that doesn’t mean A.I. in its current form presents no worries: autonomous weapons that incorrectly identify “enemies,” social media algorithms that radicalise users by steering them towards extremist content, self-driving cars that lack general intelligence. And coming back to the fact that neural networks can be tricked because they don’t really understand what they’re seeing, this raises problems of its own. What happens when someone sticks tiny “extra pixels” on a stop sign and a self-driving car’s A.I. reads it as something entirely different (say, a 100 mph speed limit sign)?

As Pedro Domingos, a professor at the University of Washington, has said: “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world.”

Photo: Markus Spiske via Pexels
