Cyber Bytes: What Is a Deepfake?

Deepfake technology is advancing rapidly. Its applications are changing how people work, play, shop, learn, travel, interact, do business and care for themselves.

Learn about good and bad deepfakes, and ways to avoid falling into a deepfake trap.

What is a deepfake?

According to the U.S. Government Accountability Office, "deepfake" is a term that combines "deep learning," the artificial intelligence (AI) technique behind the technology, with "fake."

Deepfakes are realistic images, sounds and movements created using AI and machine learning. Deepfake creators use technology to replace an actual person’s image or voice with an artificial likeness.

Once a costly and complex process, creating a deepfake is now automated and streamlined; a single person with a computer can usually produce one.

Leveraging algorithms to create deepfakes

An algorithm does most of the heavy lifting, automating the process to render a fully formed deepfake.

Algorithms

An algorithm is a step-by-step set of instructions a computer uses to solve problems. With deepfakes, AI relies on complex algorithms to analyze and manipulate data.

Generative adversarial networks

Deepfakes are produced by generative adversarial networks (GANs), which earn their name by learning through an adversarial relationship between two opposing AI networks: a generator (creator AI) and a discriminator (investigator AI). The generator produces data and the discriminator judges whether it's real or fake; the two train against each other until the generator's output is so realistic that the discriminator can no longer tell. The result is increasingly realistic image, video and audio output.
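
To make that generator-versus-discriminator game concrete, here is a minimal training loop in Python with PyTorch. It is only a sketch: it learns a toy one-dimensional number distribution rather than faces or voices, and every name and size in it (G, D, latent_dim and so on) is an illustrative choice of ours, not part of any real deepfake tool.

```python
# Toy GAN: a generator learns to mimic a "real" data distribution
# while a discriminator learns to catch its fakes.
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 1, 64

# Generator: turns random noise into fake "data".
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))

# Discriminator: scores each sample (1 = looks real, 0 = looks fake).
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)

for step in range(2000):
    real = torch.randn(batch, data_dim) * 0.5 + 3.0  # "real" data: N(3, 0.5)
    fake = G(torch.randn(batch, latent_dim))

    # 1) Train the discriminator to separate real from fake.
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(batch, 1)) + \
             bce(D(fake.detach()), torch.zeros(batch, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = bce(D(G(torch.randn(batch, latent_dim))), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()

print(G(torch.randn(5, latent_dim)).detach())  # samples should cluster near 3.0
```

Scaled up to millions of parameters and trained on images or audio instead of toy numbers, this same adversarial loop is what produces convincing deepfakes.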

How scammers use GANs to create deepfakes

Using a GAN, a scammer can generate an avatar of a real person by feeding it images of that person taken from different angles and under different lighting conditions. They can train a computer to replicate the person's voice from just a few spoken words. The result is a complete fabrication that looks and sounds like the real person.

The good and the bad of deepfakes

Deepfake technology can have negative and positive applications. Here’s the good and the bad of deepfakes:

  • Nonplayer characters. The good: they orient, entertain and inform gamers. The bad: they seem helpful but trick users into giving out personal information.
  • Virtual reality presentations. The good: they use a historical figure's likeness for immersive learning. The bad: they use a character's likeness to spread disinformation.
  • Real-time acted videos. The good: they deliver an interactive conversation with a historical or present-day figure for immersive learning. The bad: they deliver political messaging designed to mislead voters or spread disinformation.
  • AI chatbots. The good: they offer Q&As for instant, tailored information about a product or service. The bad: they pose as Q&A sessions hosted on threat actor websites and trick consumers into giving up personal information.
  • AI voice-overs. The good: AI narrates and edits typed presentations in your own voice. The bad: AI fakes your voice to spoof your identity.
  • Synthetic data. The good: AI creates synthetic data to train autonomous vehicles or test products without harming animals or humans. The bad: threat actors corrupt that data to skew results or create chaos.
  • Personal avatars. The good: AI-generated avatars replicate your voice and likeness for meeting, working or socializing in the digital space. The bad: hijacked avatars are used to scam others or defame people.
  • Biometric security. The good: facial and voice recognition use AI to secure and validate personal data. The bad: AI voice and face generators use deepfakes to bypass those measures and steal personal data.
  • HR simulations. The good: HR uses AI-generated simulations to train employees on safety, workplace harassment and other topics. The bad: scammers use deepfakes to fake an interview, get hired and access confidential data and systems.

Deepfake incidents continue to escalate

Nefarious deepfakes affect every industry. In 2020, a cybercriminal used a deepfake voice to impersonate a company director and convinced a bank manager to transfer $35 million to the criminal's accounts. In another deepfake scheme, a cybercriminal impersonating a U.S. senator sent fraudulent messages to several foreign ministers.

If cybercriminals can target bank staff and politicians, they can target anyone. Social media makes it even easier, offering a treasure trove of images, videos and audio files. It doesn't take much to build a fully formed deepfake persona from what the average person posts online.

Sextortion and nonconsensual intimate imagery

The FBI Internet Crime Complaint Center (IC3) received 54,936 sextortion complaints in 2024, up 59% from 2023. These schemes resulted in $33.5 million in losses, a 9% increase over 2023.

Cybercriminals use AI to turn real photos and videos into sexually explicit images. They typically find these photos and videos on a person’s social media accounts or the open internet.

The perpetrators send nonconsensual intimate imagery (NCII) or revenge porn directly to the victim and demand payment. Sometimes, it’s a cruel retaliation created by someone known to the victim.

Shared images and videos can go viral or live in the cloud forever, making them difficult to remove. These deepfakes can be particularly traumatic, damaging a person's life, sense of safety and livelihood. Some victims have died by suicide.

Laws are still evolving around deepfakes. These laws attempt to regulate:

  • Cyberstalking
  • Bullying
  • Harassment
  • Credible or false impersonation
  • Defamation

In 2025, President Trump signed the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, or the “Take It Down Act.” The Federal Trade Commission (FTC) will enforce the act. This law criminalizes the publication of NCII and requires websites and social media services to remove images at the victim's request.

Tips for spotting a suspected deepfake and what to do

Not everything you see or hear is real. Deepfake scams fall back on the same techniques: they appeal to your emotions and pressure you to respond immediately. But you can protect yourself. Take a breath and look for signs of a deepfake:

  • Glitchy or overly produced videos, voices, or texts that don’t look or feel right. Distorted hands are often a giveaway. Mismatched lips, eyes, hairlines, and chin lines that appear to jump or slide away from the person’s face can also be a sign.
  • Speech patterns or language that seems off. Listen to how the person is talking and the words they choose. A faker might be able to replicate an individual’s voice, but they won’t be able to imitate typical speech patterns or vocabulary. If the conversation seems irregular, question it. Ask to call the person back and use a trusted contact method.
  • Random calls that play on emotions. For example, if it's unusual for your accountant to call you with requests, question it. It's easy to fake a voice using AI. It could be a fraudster on the line.

Don’t succumb to the pressure

Slow down and think before you react:

  • Be cautious when sharing information. If you’re asked to disclose passwords, one-time passwords (OTPs), or account numbers, or transfer funds to a new account, disconnect. Financial institutions don’t ask for OTPs or passwords, or request that you transfer funds to safeguard them from hackers.
  • Be suspicious of anyone who demands immediate action. Scammers might use a deepfake to trick you into disclosing financial information, like sending you a link to verify your bank account number and password to prevent the deactivation of an account. Scammers have deepfaked grandchildren, pretending to need money to get out of immediate trouble, like an arrest. They play with your emotions to get you to do things without thinking.
  • Watch out for robocalls. These targeted voice, text and messaging scams trick you into calling their call centers, often under the guise of protecting you from a hacked account.
  • Challenge suspected deepfakes using a safe word. Agree on one in advance with family and close contacts; if the caller can't verify it, hang up and report the scam.
  • Don’t be scared or embarrassed to end contact. Scammers count on your trusting nature or fear of being rude to keep you on the line so they can bully or manipulate you. All you should worry about is your safety.

Fake-detecting technology

New security solutions and scam-alert technologies are emerging. One example is Deepware, a service that lets you scan a suspected deepfake video. An internet search for "deepfake detection tools" will surface more options. Companies are also adding features to their media, like:

  • Digital watermarks as small as a pixel that are imperceptible to humans but detectable by computers (see the sketch after this list)
  • Security metadata baked into the file itself
  • Blockchain tokens that show when a file’s been altered
  • Enhanced security questions to validate the authenticity of videos, posts and audio snippets
  • Banners that label content as AI-generated
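
To make the watermarking item concrete, the toy sketch below hides a short tag in the least significant bit of an image's pixels: a one-bit change per pixel that the eye can't see but software can read back. It's a minimal illustration in Python with NumPy, and only that; production watermarking schemes are designed to survive cropping, compression and re-encoding, which this one would not.

```python
# Least-significant-bit (LSB) watermark: invisible to humans, trivial for code.
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide one bit per pixel in the lowest bit of each 8-bit value."""
    flat = pixels.flatten()  # flatten() returns a copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the hidden bits back out."""
    return pixels.flatten()[:n_bits] & 1

image = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in image
bits = np.unpackbits(np.frombuffer(b"AI", dtype=np.uint8))   # tag to hide

stamped = embed(image, bits)
print(np.packbits(extract(stamped, bits.size)).tobytes())    # b'AI'

# No pixel moved by more than 1 out of 255, so the change is invisible.
assert np.abs(stamped.astype(int) - image.astype(int)).max() <= 1
```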

The government’s role in identifying caller scams

The Federal Communications Commission (FCC) helps consumers identify robocalls and suspected roboscams using a caller ID authentication technology called STIR/SHAKEN. It validates that a call really comes from the number it claims and flags suspect calls on the caller ID display, usually with the word "Robo" or "Scam" before the caller's name. If you get a call with these words on the display, ignore it.
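
Under the hood, STIR/SHAKEN has the originating phone carrier cryptographically sign the caller's details in a token called a PASSporT, which downstream carriers verify before trusting the caller ID. The sketch below is a simplified illustration of that sign-and-verify idea in Python using the PyJWT and cryptography libraries; the phone numbers are placeholders, and a real deployment uses certificates issued under the STIR/SHAKEN governance framework rather than a key generated on the spot.

```python
# Simplified PASSporT-style sign-and-verify, the core idea of STIR/SHAKEN.
import time
import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import ec

# Stand-in for the originating carrier's credential (real carriers use
# certificates issued by the STIR/SHAKEN certificate authorities).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Claims: calling number, called number, timestamp, and attestation level
# ("A" means the carrier fully vouches for the caller's right to the number).
claims = {
    "orig": {"tn": "12025550100"},
    "dest": {"tn": ["12025550123"]},
    "iat": int(time.time()),
    "attest": "A",
}

token = jwt.encode(claims, private_key, algorithm="ES256")

# The terminating carrier verifies the signature before trusting caller ID.
# A spoofed number with no valid signature fails this check and can be
# labeled "Scam" or "Robo" on your phone's display.
verified = jwt.decode(token, public_key, algorithms=["ES256"])
print("caller:", verified["orig"]["tn"], "| attestation:", verified["attest"])
```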

Report suspicious activity and stay vigilant

Deepfakes shake the core of how we trust. Seeing is no longer the basis for believing, at least not in the digital world.

If you suspect you’ve encountered a deepfake, report it to the FTC or IC3. Your information helps the authorities stay on top of cybercriminals and their latest scams.

Monitor what you share, use your judgment, practice good cybersecurity and trust your gut. If it feels off, it probably is. Stay cybersafe out there!