Deepfake videos may once have been regarded as a quirky feature of the Internet, but they have become one of the most formidable weapons in the cybercriminal’s arsenal. Deepfake technology, an AI-based technique for producing synthetic media, can fabricate voices and images so convincing that even attentive individuals fall for the imitations. Criminals now use it to impersonate company executives, fabricate celebrity endorsements and even pose as family members in distress.

IBM reports that data breaches increasingly involve AI-driven attacks, with phishing and deepfakes among the most commonly cited methods. As the technology becomes cheaper and easier to access, awareness of how these scams work has never been more important.

The 2026 Anti-Fraud Technology Benchmarking Report by the Association of Certified Fraud Examiners (ACFE) and SAS, an industry study widely cited in cybersecurity, found: “Deepfake social engineering saw the sharpest surge, with 77% of respondents reporting a slight-to-significant increase.” It added, “Only 7% of anti-fraud professionals say their organisations are more than moderately prepared to detect or prevent AI-fuelled fraud.” The numbers back up the warning: deepfake scams are rapidly increasing while most organisations remain unprepared, which makes awareness and scepticism the first line of defence.
Deepfake technology is increasingly being weaponised by fraudsters to impersonate executives, celebrities and even loved ones, costing victims millions.
Below, we break down how deepfake fraud works, the real-world cases that illustrate its scale and what people can do to protect themselves.
What deepfake fraud is and how it works
Deepfake technology uses artificial intelligence to synthesise realistic audio, video and images of real people, making it appear they said or did something they never did. For criminals, this has created a new category of fraud that is far more convincing than a traditional phishing email.

The most common forms include:
- impersonating senior executives to authorise fraudulent transfers
- fabricating celebrity endorsements to promote investment scams
- mimicking a family member’s voice to claim they are in an emergency
Danny Mitchell, Cybersecurity Writer at Heimdal Security, a Copenhagen-based cybersecurity company whose AI-powered protection platform is used by enterprises and security teams worldwide, has spent considerable time studying how AI is being weaponised against everyday people and organisations. He shared, “What makes deepfake fraud particularly dangerous is how accessible the technology has become. A few years ago, creating a convincing deepfake required significant technical skill. Now, tools are widely available online that can generate fake audio or video in minutes.”
Cybersecurity expert identifies the real-world scams to know about and the warning signs that can help you spot a deepfake.
Modern deepfakes can replicate a person’s voice, facial expressions and mannerisms with enough accuracy to bypass the instinctive checks most people rely on.

In a 2026 report published in The EDP Audit, Control, and Security Newsletter (EDPACS), author Ahmet Yiğitalp Tulga noted, “The rapid proliferation of artificial intelligence (AI) and deepfake technologies has introduced new and complex risks to individuals, companies, financial systems and digital trust.” Deepfake fraud, in short, is no longer theoretical; it is already affecting financial systems and individuals at scale.

“Traditional scams rely on urgency and anonymity,” Mitchell added. “Deepfake fraud goes further by borrowing someone’s identity completely, which is why victims so often don’t realise what has happened until it’s too late.”
Real-world examples of deepfake scams
Several high-profile cases in recent years show just how far this type of crime has progressed.
- The $26 Million Video Call Scam: An employee at a large Hong Kong-based multinational was tricked into transferring nearly $26 million to criminals after joining what appeared to be a legitimate internal video conference. Every other participant on the call was a deepfake. The fraud only came to light after the employee contacted their head office.
- The Deepfake Romance Gang: A fraud network dismantled in Asia used AI-generated female profiles to lure men in India, Taiwan and Singapore into fake relationships. Before law enforcement caught up with it, the group had extracted as much as $46 million from victims who simply trusted the people they believed they were talking to.
- Celebrities Used as Bait: In one recent case, a woman spent two years believing she was in an online relationship with actor Martin Henderson, known for his roles in Virgin River and Grey’s Anatomy. Using AI-generated voice messages and deepfake video, perpetrators convinced her to send $375,000.
“Criminals use celebrities because the familiarity people feel towards them can override rational judgement,” said Mitchell. “When someone believes a famous person has singled them out, the emotional pull is powerful. That is exactly what these fraudsters are counting on.”
Expert warns that scepticism around unexpected financial requests is now one of the most important defences people have against AI-powered fraud.
A 2026 systematic literature review in the Journal of Visual Communication and Image Representation observed, “Deepfake… has taken synthetic media closest to the reality that the human eye cannot differentiate.” The finding underpins the core concern: deepfakes are now convincing enough to deceive even careful individuals, which is why victims fall for scams involving video calls, celebrity endorsements and family impersonation.
Warning signs a video or voice might be a deepfake
Despite how convincing deepfakes have become, they are not flawless. There are still signs to look for.
- Unnatural facial movements or blinking: Deepfake videos sometimes struggle to mimic the subtlest aspects of human expression. Watch for blurred edges around the face, irregular or oddly timed blinking and smiles that do not match the rest of the expression. (A simple blink-rate check is sketched in the code after this list.)
- Audio that sounds slightly off: AI-generated voices often carry a slight flatness or an unnatural rhythm, and background noise can sound conspicuously artificial.
- Mismatched lip movements: Synchronisation between lips and speech is often imperfect, particularly during faster speech.
- Urgent requests for money or sensitive information: Any pressure to act quickly, transfer funds, or share personal details through an unusual channel should raise immediate concern.
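For readers who want to see how one of these cues can be tested in practice, below is a minimal Python sketch of the blink check, not a production detector. It assumes you have already extracted per-frame eye-landmark coordinates with a face-landmark library such as dlib or MediaPipe (that extraction step is omitted here), and it applies the widely used eye-aspect-ratio heuristic from blink-detection research. The thresholds and function names are illustrative assumptions, not calibrated values.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]  # (x, y) pixel coordinates of one landmark


def eye_aspect_ratio(eye: List[Point]) -> float:
    """Eye aspect ratio (EAR) from the six standard eye landmarks.

    Landmark order follows the common dlib convention: outer corner,
    upper-outer, upper-inner, inner corner, lower-inner, lower-outer.
    EAR drops sharply while the eye is closed.
    """
    def dist(a: Point, b: Point) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])

    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)


def blinks_per_minute(ear_per_frame: List[float], fps: float,
                      closed_threshold: float = 0.21) -> float:
    """Count falling edges where EAR dips below the threshold.

    A closed_threshold of roughly 0.2-0.25 is typical, but it varies
    with camera angle and face shape, so treat it as a starting point.
    """
    blinks, eye_closed = 0, False
    for ear in ear_per_frame:
        if ear < closed_threshold and not eye_closed:
            blinks += 1          # eye just transitioned to closed
            eye_closed = True
        elif ear >= closed_threshold:
            eye_closed = False
    minutes = len(ear_per_frame) / fps / 60.0
    return blinks / minutes if minutes > 0 else 0.0


def blink_rate_is_suspicious(ear_per_frame: List[float], fps: float) -> bool:
    """People at rest blink very roughly 8-20 times per minute.

    A clip with almost no blinks, or constant fluttering, deserves a
    closer look. The cut-offs here are illustrative, not calibrated.
    """
    rate = blinks_per_minute(ear_per_frame, fps)
    return rate < 4.0 or rate > 40.0
```

It is worth noting that newer deepfake generators have largely learned to blink convincingly, so a heuristic like this is a supporting signal at best. The behavioural red flag above, pressure to move money quickly, remains the more reliable test.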
“If you slow down and look carefully, there are often clues,” said Mitchell. “But the most practical warning sign isn’t technical. If someone is pressuring you to act fast or transfer money through an unusual channel, that alone should give you pause, no matter how convincing the video or voice appears.”

Modelling how generative AI manipulates human decisions in social engineering fraud, an April 2026 research paper on arXiv put it bluntly: “AI has not invented a new crime… it has industrialised an ancient one: the manufacture of trust.” Deepfake fraud succeeds, in other words, because it exploits trust, not just technology.

Protecting yourself from deepfake fraud comes down to one habit above all else: verify before you act. If you receive an unexpected request for money or sensitive information, even from someone who looks and sounds completely familiar, confirm it through a separate, trusted channel before doing anything. Call the person back on a number you already have. Check with a colleague. Take the time to question it.

Mitchell concluded, “It is also worth staying informed about how these scams are developing. AI-enabled fraud is moving quickly and the tactics criminals use are becoming more sophisticated. The more people understand how deepfakes work, the harder it becomes for fraudsters to use them successfully. Awareness, paired with a healthy scepticism around unexpected requests, remains one of the most effective defences available.”