Wherever there is unrealized potential for invention, there have been tales of dystopia, from Ray Bradbury to Black Mirror, warning of the double-edged sword that is the future. But most fiction (and it is fiction) has a distant quality to it. Yes, we have moved on from flying hoverboards and cars to more discreet, modern ideas of what technology gone wrong will look like, but as disconcerting as these plot lines can get, the threats of a robot bee army or a virtual-reality horror game are just removed enough from our reality for us to overlook the more immediate concerns surrounding artificial intelligence.
In a thread he posted on Twitter late last year, Silicon Valley star Kumail Nanjiani outlined exactly how dicey accountability can become when it comes to the creation of new tech.
As a cast member on a show about tech, our job entails visiting tech companies/conferences etc. We meet ppl eager to show off new tech.
— Kumail Nanjiani (@kumailn) November 1, 2017
And we’ll bring up our concerns to them. We are realizing that ZERO consideration seems to be given to the ethical implications of tech.
— Kumail Nanjiani (@kumailn) November 1, 2017
“We’re not making it for that reason but the way ppl choose to use it isn’t our fault. Safeguard will develop.” But tech is moving so fast.
— Kumail Nanjiani (@kumailn) November 1, 2017
Only “Can we do this?” Never “should we do this?” We’ve seen that same blasé attitude in how Twitter or Facebook deal w abuse/fake news.
— Kumail Nanjiani (@kumailn) November 1, 2017
You can’t put this stuff back in the box. Once it’s out there, it’s out there. And there are no guardians. It’s terrifying. The end.
— Kumail Nanjiani (@kumailn) November 1, 2017
Just last month, The New York Times reported that researchers had found a way to send secret commands, undetectable to the human ear, to smart assistants like Alexa or Siri. A few weeks ago, MIT researchers reported having trained an A.I. algorithm to become a “psychopath” by exposing it to the darkest corners of the internet, namely, you guessed it, Reddit. With tech advancing this quickly, the ethical debate raises far more questions than it answers. For instance, when should we intentionally begin to stall progress? And to what extent?

Where many of our nation’s lawmakers, some of whom can barely operate Facebook, much less conduct a Senate hearing on it, cannot lead us, the top names in the tech industry should. Yet the current debate is polarized, with the likes of Elon Musk and Mark Zuckerberg at opposite ends: the former warning that superintelligence will be the bane of civilization as we know it, the latter accusing naysayers of hindering useful, revolutionary progress.
Zuckerberg blasts @elonmusk warnings against artificial intelligence as 'pretty irresponsible' https://t.co/DzPjvBym7W @svbizjournal #ai
— Darren Cunningham (@dcunni) July 25, 2017
I've talked to Mark about this. His understanding of the subject is limited.
— Elon Musk (@elonmusk) July 25, 2017
Regardless of who’s right, the rest of society (the part that thinks Raspberry Pi is just a baked good) deserves some sort of consensus on exactly how cautious, or panicked, we need to be. Technology has become, and will continue to become, irrevocably ingrained in our lives, and anything that pervasive and powerful needs proper, preemptive regulation and, at the very least, the attention of the national consciousness.