Deepfakes, Deep Trouble

Not long ago, Forbes.com published an article about AI-generated pictures and videos (deepfakes) going viral. "Deepfakes first emerged on the Internet in late 2017, powered by an innovative new deep learning method known as generative adversarial networks (GANs)," wrote the article's author, Rob Toews. Of note, however, is his observation that deepfakes threaten to grow from an Internet oddity into a widely destructive political and social force.
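The adversarial idea behind GANs is simple to illustrate: a generator learns to mimic real data while a discriminator learns to tell real from fake, and each improves by playing against the other. Below is a minimal toy sketch of that tug-of-war, using hand-derived gradients on a one-dimensional example (all names and hyperparameters here are invented for illustration; real deepfake models are vastly larger neural networks):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Generator: x = g_w * z + g_b, z ~ N(0,1)  (tries to mimic the real data)
# Discriminator: D(x) = sigmoid(d_w * x + d_b)  (tries to tell real from fake)
g_w, g_b = 1.0, 0.0
d_w, d_b = 0.1, 0.0
lr, batch = 0.05, 64
real_mean, real_std = 4.0, 1.25  # the "real" distribution the generator must fake

def generate(n):
    z = rng.standard_normal(n)
    return z, g_w * z + g_b

_, fake0 = generate(1000)
before = abs(fake0.mean() - real_mean)  # how far off the untrained generator is

for step in range(2000):
    # Discriminator update: push D(real) up and D(fake) down.
    x_real = rng.normal(real_mean, real_std, batch)
    z, x_fake = generate(batch)
    s_real = sigmoid(d_w * x_real + d_b)
    s_fake = sigmoid(d_w * x_fake + d_b)
    d_w += lr * np.mean((1 - s_real) * x_real - s_fake * x_fake)
    d_b += lr * np.mean((1 - s_real) - s_fake)
    # Generator update: push D(fake) up (the "non-saturating" GAN loss).
    z, x_fake = generate(batch)
    s_fake = sigmoid(d_w * x_fake + d_b)
    g_w += lr * np.mean((1 - s_fake) * d_w * z)
    g_b += lr * np.mean((1 - s_fake) * d_w)

_, fake1 = generate(1000)
after = abs(fake1.mean() - real_mean)
print(f"generator mean before: {fake0.mean():.2f}, after: {fake1.mean():.2f}")
```

After training, the generator's output distribution has drifted toward the real one, purely because fooling the discriminator forced it there. That same dynamic, scaled up to millions of parameters and image data, is what makes deepfakes keep getting more convincing.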

A Republican senator even called them the modern equivalent of nuclear weapons.

If you have ever come across a deepfake, you may have been able to spot it: early ones were flickery and buggy, and the characters' movements did not look natural.
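That flicker is a temporal giveaway: real footage changes smoothly from frame to frame, while a glitchy fake can jump. A crude toy illustration of the idea, on synthetic "clips" (this is only a sketch of the intuition, not a real deepfake detector):

```python
import numpy as np

rng = np.random.default_rng(1)

def flicker_score(frames):
    """Mean absolute brightness change between consecutive frames.

    A crude temporal-consistency measure: smooth footage scores low,
    frame-to-frame glitches score high.
    """
    frames = np.asarray(frames, dtype=float)
    return float(np.abs(np.diff(frames, axis=0)).mean())

# Two synthetic 'clips': 30 frames of 32x32 grayscale pixels.
smooth = np.cumsum(rng.normal(0, 0.5, (30, 32, 32)), axis=0)  # drifts gently
flickery = smooth + rng.normal(0, 5.0, (30, 32, 32))          # adds per-frame jumps

print(flicker_score(smooth), flicker_score(flickery))
```

The flickery clip scores far higher. The catch, as the next paragraph notes, is that modern deepfakes have smoothed most of these artifacts away, so simple cues like this are no longer reliable.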

But in a very short amount of time, they have gotten a whole lot more realistic. Plus, there is that old-as-time human tendency to seek out information that supports what we want to believe, and ignore the rest. Deepfakes are very capable of hacking this human vulnerability, making them a very real tool for spreading disinformation and chaos.

And now there is very real incentive and opportunity to do so. Let me explain.

When we read news about COVID-19 accelerating the use and development of technology, we must not forget that this applies to the bad guys as well.

A few months ago, when shelter-in-place orders were implemented, I watched with amusement as, during video calls, friends and colleagues put up virtual backgrounds of where they really wanted to be: by the beach, next to a palm tree, under the Statue of Liberty, inside a futuristic spaceship… you name it.

If you can imagine it, artificial intelligence and virtual technologies could probably create it for you.

Now, news reports like Forbes' are saying that virtual 'humans' have become strikingly lifelike and realistic, and it is a chilling thought.

A few months ago, when countries around the world began to shut down economic activities and order their citizens to stay home, the prospect of real, live, physical, face-to-face interactions with other humans seemed very, very remote. A new normal of virtual communications, and ONLY virtual communications, with society (except the delivery guy) seemed extremely likely.

What if deepfake technology (which is already so lifelike) gets to the point that it can impersonate a family member over a video call and interact with you convincingly enough that you hand over your life savings? What if there is a deepfake of your boss, authorising you to do something not advantageous to the business?

There are already cyberattacks which impersonate the 'boss' to trick employees into leaking information… and these work via email alone!

In what once seemed like an 'only virtual communications' world, how could we possibly verify who is real and who is a deepfake (there are reports that even a person's voice can be recreated)? And if deepfakes do come calling, are there steps and controls in place for us to verify?
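One control worth noting: a deepfake can mimic a face or a voice, but it cannot compute an answer derived from a secret it never had. A minimal, hypothetical sketch of such a check is a challenge-response over a key agreed offline (the key and function names below are invented for illustration; real identity-verification systems are more involved):

```python
import hashlib
import hmac
import secrets

# A secret agreed in person beforehand (hypothetical example value).
SHARED_KEY = b"agreed-offline-in-person"

def make_challenge():
    # Your side of the call generates a fresh random challenge each time.
    return secrets.token_hex(16)

def respond(challenge, key=SHARED_KEY):
    # Only someone holding the shared key can compute the right response.
    return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()

def verify(challenge, response, key=SHARED_KEY):
    # Constant-time comparison, so timing leaks nothing about the answer.
    return hmac.compare_digest(respond(challenge, key), response)

challenge = make_challenge()
genuine = verify(challenge, respond(challenge))             # the real caller
impostor = verify(challenge, respond(challenge, b"guess"))  # a deepfake guessing
print(genuine, impostor)
```

The low-tech family version of the same idea is simply agreeing on a code word in advance and asking for it when a video call feels off.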

IT BYTES BACK! says: Economies are taking steps to restart activity again. People are slowly starting to venture out of their homes again, and seeing other people's faces. I hope we never have to deal with deepfakes while on video calls with family, friends, and colleagues.