In this era of misinformation, we have grown accustomed to maintaining a sense of skepticism while reading article headlines, being careful not to fall for “fake news.” But with the continuing advancements in artificial intelligence technology, it has become increasingly apparent that we now must also be wary of fake videos, or deepfakes.
Deepfakes are manufactured or altered videos that make people appear to do or say something that they never did. Examples of widely circulated deepfakes include the Obama/Peele video and the video of Kim Kardashian appearing to say that she “enjoyed manipulating people.” To viewers who don’t know that face- and audio-swapping technology was used, these videos can look very realistic and convincing. Altering video is nothing new; Hollywood has been doing it for decades. But it used to require vast amounts of money, time, and skilled artists. Now, with apps like FakeApp, this technology is becoming more and more accessible to the public. Currently, creating a deepfake still takes considerable time, and it is still possible to identify one as fake by slowing the video down or inspecting the mouth. But as AI grows and develops, the process will become faster and the videos will become even harder to separate from reality.
Recently, a doctored video of House Speaker Nancy Pelosi speaking at a press conference highlighted the dangers of deepfake technology. The video was edited and slowed down to exaggerate verbal missteps and make it appear as if Pelosi were slurring her words. President Trump was quick to share the video on his Twitter account, tweeting “PELOSI STAMMERS THROUGH NEWS CONFERENCE” and suggesting that Pelosi is “crazy” and a “mess.” The video now has 6.35 million views.
“PELOSI STAMMERS THROUGH NEWS CONFERENCE” pic.twitter.com/1OyCyqRTuk
— Donald J. Trump (@realDonaldTrump) May 24, 2019
The implications that deepfake technology could have for the upcoming 2020 election are unnerving to think about. People could make videos that show a candidate in a bad light, and given the viral nature of the internet, such a deepfake could spread very quickly. By the time the video is confirmed as a deepfake, millions may have already seen it and formed opinions based on a falsified reality.
“A lie can go halfway around the world before the truth can get its shoes on, and that’s true,” David Doermann said at the House Intelligence Committee’s hearing on deepfakes last Thursday. Doermann, the Director of the University at Buffalo Artificial Intelligence Institute, suggested that social media companies could slow the spread of deepfakes by using tools that analyze audio and video together to verify a video’s authenticity.
Other ideas discussed at the hearing included lawmakers working with social media companies, imposing strict sanctions on troll farms, and candidates working with social media companies to set standards against misinformation.
Deepfakes not only allow people to construct an alternate reality; they also create the problem of people rejecting real evidence as misinformation. For example, Trump has already suggested that the “Access Hollywood” tape in which he made lewd comments about women was doctored. This is known as the liar’s dividend: people deny the truth by taking advantage of the public’s distrust of the media and awareness of deepfakes. This poses a huge threat to journalism as well, as public distrust of news sources will grow and people may discount true evidence as deepfakes.
Having to sift fact from fiction could breed a sense of apathy among the public. But at a time when we sometimes feel we cannot even trust our own eyes and ears, it is important to stay aware of misinformation technology, to be skeptical as consumers, and to always seek the truth.