DeepFakes: Is Seeing Really Believing?

In a world in which phrases like “Fake News” dominate the airwaves and fraudsters are trying to steal our money and information, we all need to be more vigilant about verifying the authenticity of what we see on the Internet and other information sources. But what do you do when you can’t believe your own eyes?

Deepfakes are fake videos or audio recordings that look and sound just like the real thing. Once only the bailiwick of Hollywood special effects studios and intelligence agencies producing propaganda, today anyone can download deepfake software and create convincing fake videos in their spare time.

So far, deepfakes have been limited to amateur hobbyists putting celebrities’ faces on porn stars’ bodies and making politicians say funny things. However, it would be just as easy to create a deepfake of an emergency alert warning that an attack was imminent, destroy someone’s reputation with a fake video, or disrupt a close election by dropping a fake video or audio recording of one of the candidates days before voting starts.

How deepfakes work

Seeing is believing, the old saw has it, but the truth is that believing is seeing: Human beings seek out information that supports what they want to believe and ignore the rest.

Deepfakes exploit this human tendency using artificial intelligence (AI). One AI model, the forger, manipulates the face in a video, drawing on a stored history of the victim’s facial and vocal expressions to synchronize the face with an audio track or replace it with someone else’s. A second AI model then tries to detect the forgery. The forger keeps creating fakes until the second model can no longer tell them apart from the real thing. The larger the set of training data, the easier it is for the forger to create a believable deepfake. This is why videos of former presidents and Hollywood celebrities have been frequently used in this early, first generation of deepfakes — there’s a ton of publicly available video footage to train the forger.
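The forger-versus-detector loop described above can be illustrated with a deliberately tiny sketch. This is not real deepfake software; it is a toy model (all distributions, thresholds, and parameter names are illustrative assumptions) in which a “forger” nudges a single number toward the statistics of “real” data until a simple “detector” can no longer do better than a coin flip:

```python
import random

# Toy illustration of the adversarial loop: a "forger" produces samples,
# a "detector" tries to tell them from real ones. The Gaussian "real"
# distribution and all constants below are illustrative assumptions only.
random.seed(0)

REAL_MEAN = 5.0      # the statistic real data exhibits (assumed)
forger_mean = 0.0    # the forger starts far from realistic output

def detector_accuracy(forger_mean, trials=1000):
    """Detector labels a sample 'fake' if it falls below the midpoint
    between the forger's current output and real data."""
    threshold = (REAL_MEAN + forger_mean) / 2
    correct = 0
    for _ in range(trials):
        real = random.gauss(REAL_MEAN, 1.0)
        fake = random.gauss(forger_mean, 1.0)
        if real > threshold:    # real sample correctly accepted
            correct += 1
        if fake <= threshold:   # fake sample correctly flagged
            correct += 1
    return correct / (2 * trials)

# The forger keeps adjusting until the detector is near chance level.
for round_ in range(50):
    acc = detector_accuracy(forger_mean)
    if acc <= 0.55:  # detector is close to a coin flip: fakes pass
        break
    forger_mean += 0.5 * (REAL_MEAN - forger_mean)  # move toward realism

print(f"rounds: {round_}, final detector accuracy: {acc:.2f}")
```

In real systems both sides are neural networks trained jointly (a generative adversarial network), but the dynamic is the same: each improvement in the detector pushes the forger to produce more convincing output.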

What can you do?

Detecting deepfakes is hard. Amateurish deepfakes can, of course, be spotted by the naked eye, and machines can catch other telltale signs, such as a lack of eye blinking or shadows that look wrong. But the tools that generate deepfakes are getting better all the time, and soon we will have to rely on digital forensics to detect them. Also, check out this Critical Thinking and News Guide, which is authored and managed by Helen Lane, Instructional Design Librarian in the Gladys Marcus Library. It shows you samples of deepfake videos and explains some of the science behind creating them. In the meantime, be skeptical: if you’re watching a video of someone doing something incredible or outrageous, seeing might not be believing.

About Cybersafe

The Division of Information Technology is dedicated to informing the community of the latest cybersecurity threats. Visit fitnyc.edu/cybersafe and stay tuned for emails from [email protected] for the latest from the Cybersafe campaign at FIT.


-Walter Kerner

Assistant Vice-President and Chief Information Security Officer

Read past issues of the CISO Updates Newsletter here.

Note: This article is largely based on “What are deepfakes? How and why they work” by J.M. Porup in CSO Magazine, 11/8/2018.