The Evolution of Deepfakes

Back in November 2018, we published a warning about the rise of deepfakes: AI-generated videos that could mimic real people. At the time, they were clunky, easy to spot, and mostly seen as a novelty. Fast forward to 2025, and the line between real and fake has nearly vanished. Deepfake technology has advanced from basic face-swap videos to entire synthetic identities powered by generative AI. What once took hours and expensive software now takes seconds and a smartphone app.
How Deepfakes Have Changed
The technology once required highly skilled users and powerful graphics processing hardware. Now even casual users can create deepfakes with widely available AI tools, some of which allow real-time face and voice swaps in video calls, live streams, and recorded media. It's not just celebrities and public figures being impersonated: anyone with a social media presence can become a target.
Even more concerning, AI voice cloning technology can now recreate someone’s voice with as little as 3 seconds of audio. This capability is increasingly used in scams where attackers pretend to be family members in distress or executives demanding urgent wire transfers.
Deepfakes are also being integrated into misinformation campaigns. Some recent examples include fabricated videos of politicians making inflammatory statements, synthetic news anchors pushing false stories, and fake testimonies in court or media settings. These are often enhanced with metadata manipulation to fool even digital forensic tools.
Why It Matters
Deepfakes are no longer just about image manipulation or internet pranks. They are about narrative control, trust exploitation, and psychological manipulation. They undermine confidence in audio and video evidence, create confusion about what is real, and erode the public’s ability to make informed decisions.
In the corporate world, attackers are using deepfakes to impersonate executives in convincing video messages sent via Teams or Zoom, bypassing traditional Business Email Compromise (BEC) detection systems. Forbes reports that attackers have used deepfakes to impersonate executives at high-profile companies, such as Ferrari, and trick employees into transferring funds. Criminals are also conducting what's called "Vishing 2.0": using cloned voices in phone scams that are emotionally manipulative and hard to detect.
During elections and geopolitical conflicts, adversarial actors use fake videos to spread misinformation, sway public opinion, incite division, and amplify propaganda. When combined with social media bots and algorithmic amplification, deepfakes become a powerful tool for large-scale disinformation.
On an individual level, people have had their likenesses used in fabricated criminal accusations and AI-generated scams on dating platforms. These personal attacks are designed to embarrass or extort victims; they cause real psychological harm and are increasingly hard to trace.
What You Can Do
Be Skeptical of What You See and Hear. Just because it looks or sounds real doesn't mean it is. Requests that stress secrecy and urgency are red flags. Always verify with a second source.
Slow Down and Think. Deepfake scams are designed to provoke emotion and urgency. Pause before reacting.
Strengthen Your Sign-In Methods. Use multi-factor authentication (MFA) wherever it is available for your personal accounts, and set your social media profiles to private. These steps make it harder for attackers to harvest your photos and use your likeness in deepfakes.
Learn How to Spot Deepfakes with this CBS News Chicago interview with Daniel Kendzior.
If you haven’t yet, we encourage you to take FIT’s mandatory Cybersafe cybersecurity training. It only takes about 30 minutes, and you can easily pause and resume whenever you like. This training is designed to help you recognize and defend against the types of threats that even global brands like Ferrari are currently facing.
Learn more about cybersecurity training or start your cybersecurity training now.
Rakesh Kumar
AVP of IT Infrastructure Services and Chief Information Security Officer
Information Technology
Fashion Institute of Technology
333 Seventh Ave, 13th floor
New York, NY 10001
(212) 217-3403
About Cybersafe
The Division of Information Technology is dedicated to protecting the FIT community from the latest cybersecurity threats by providing warnings and creating awareness through training and information-sharing. Visit fitnyc.edu/cybersafe for more information, and watch for emails from [email protected] with the latest from the Cybersafe campaign at FIT.
