Opinion: Deepfake Scams Are Exploiting Love and Trust Like Never Before



AI-driven scams are on the rise, with cybercriminals using deepfake technology to create convincing videos and images that manipulate victims. From a French woman who believed she was helping Brad Pitt to a victim convinced by a high-quality deepfake, the lines between reality and deception are blurring. As AI tools evolve, so do the risks.

Many of us saw it in the news in January: a French woman lost over $850,000 to scammers because she believed she was giving the money to Brad Pitt—the man she thought she had been in a relationship with for over a year.

The scammers used advanced generative artificial intelligence to create “selfies” and fabricate evidence, convincing their victim, Anne—a 53-year-old designer going through a divorce—of the tragic story of a fake Pitt with urgent medical needs. They claimed his funds were frozen due to his divorce from Angelina Jolie.

Anne made the brave—and risky—decision to share her story publicly, facing a wave of mockery, insults, and online attacks. “Like a fool, I paid… Every time I doubted him, he managed to dissipate my doubts,” she said on a French YouTube show, as reported by the BBC. “I just got played, I admit it, and that’s why I came forward, because I am not the only one.”

She isn’t. A few months later, Spanish police arrested five people who had scammed two other women by posing as Pitt. A few days ago, a Chinese man was similarly deceived into believing his online girlfriend needed money for medical procedures and to fund her business.

The victims received personalized videos and photographs—images that weren’t available anywhere else online—further convincing them of their deceivers’ elaborate lies.

A recent report from Entrust suggests that a deepfake attempt occurs every five minutes. New cases of people scammed with generative AI emerge every day—a worrying trend, especially considering the thousands, or even millions, of people buried in debt and shame who don’t dare to report it to the authorities, let alone make it public.

Deepfake Scams on the Rise

Multiple studies and reports have raised alarms about the rise in AI-powered fraud and cyberattacks. Recent data from TrustPair’s 2025 Fraud Trends and Insights showed a 118% year-over-year increase in AI-driven fraud, as reported by CFO.

Hiya, an American company specializing in voice security and performance solutions, recently shared survey results showing that 31% of consumers across six countries received deepfake calls in 2024, and that 45% of those targeted were scammed—34% of that group lost money and 32% had personal information stolen. On average, victims lose over $500 each to phone scams.

A few days ago, The Guardian revealed that an organized network based in Georgia—the country in Eastern Europe—used fake ads on Facebook and Google to scam over 6,000 people across Europe, Canada, and the UK, netting $35 million through its operations.

Around 85 well-paid Georgian scammers used the likenesses of public figures such as English journalist Martin Lewis, writer and adventurer Ben Fogle, and Elon Musk in their frauds. The scammers promoted fake cryptocurrency and other investment schemes, persuading victims to transfer money through digital banks like Revolut—which recently received a banking license in the UK.

Advanced AI, More Sophisticated Scams

Cybercriminals have been using generative AI for the past few years, leveraging tools like ChatGPT to craft and translate engaging emails and generate persuasive text-based content. Now, as AI tools evolve, the use of AI-generated images and videos has increased.

A few weeks ago, ByteDance introduced its latest AI video tool, OmniHuman-1, capable of generating some of the most realistic deepfakes on the market. Meanwhile, more AI companies are developing similar technologies. It seems to be only a matter of time before these tools are also used for scams.

While these technologies can be used “in good faith” and even to counteract the rise of scams—like O2’s AI ‘Granny’ Daisy, designed to engage fraudsters in real-time calls and distract them from real victims—the consequences of their malicious use seem immeasurable.

At Wizcase, we recently reported a 614% increase in “Scam-Yourself Attacks,” noting how hackers use deepfake technologies to make fake content look “more realistic,” and how social media companies like Meta have had to intervene in pig-butchering scams, since many threat actors operate on these platforms. In its research, Meta noted that many scams started on dating apps, proving that romantic love remains among the most common baits—now and historically.

Love: A Powerful Bait

Cybercriminals aren’t just skilled at understanding and using advanced artificial intelligence—they also have a deep understanding of human intelligence. Threat actors know how to identify vulnerabilities, build trust, and make their requests at just the right moment.

The study Do You Love Me? Psychological Characteristics of Romance Scam Victims, published in 2018 by Monica T. Whitty, PhD, explains how international criminal groups have run dating-romance scams for many years—even before the Internet—and how middle-aged, well-educated women are especially likely to fall for this type of scam—just like Anne.

What can we expect now, years after this study, with science-fiction-like technology? We are probably all more vulnerable than we think.

“Victims of this type of fraud are often people in search of meaning, who are emotionally vulnerable,” wrote Annie Lecompte, associate professor at the University of Quebec in Montreal (UQAM), in a recent article published in The Conversation. “Although it is mocked and misunderstood, romance fraud is based on complex psychological mechanisms that exploit victims’ trust, emotions and vulnerability.”

A Broken Heart, An Empty Wallet

Liu—the surname of the Chinese man who recently lost 200,000 yuan (around $28,000) in an AI-powered scam—truly believed his online girlfriend was real, as he saw personalized photos and even videos, building an increasingly strong emotional attachment… to his scammers.

While the reports on his case offered few further details, another victim, 77-year-old Nikki MacLeod, shared hers. She, too, thought she was in a real relationship, and sent her AI-generated “girlfriend” £17,000—around $22,000—through bank transfers, PayPal, and gift cards.

“I am not a stupid person but she was able to convince me that she was a real person and we were going to spend our lives together,” said MacLeod to the BBC.

MacLeod was feeling lonely and sad when she met “Alla Morgan” in a group chat. After a while, MacLeod requested a live video call, but Morgan said it wasn’t possible because she worked on an oil rig. When MacLeod grew suspicious, Morgan began sending realistic videos. “She sent me a video to say ‘Hi Nikki, I am not a scammer, I am on my oil rig’, and I was totally convinced by it,” explained MacLeod. The content is available on the BBC’s website, and it’s easy to see why MacLeod believed it was real—it’s a high-quality deepfake.

The BBC asked Dr. Lynsay Shepherd, an expert in cybersecurity and human-computer interaction at Abertay University, to analyze the photos and videos MacLeod received. “At first glance it looks legitimate, if you don’t know what to look for, but if you look at the eyes – the eye movements aren’t quite right,” said Dr. Shepherd.

“The documents looked real, the videos looked real, the bank looked real,” said MacLeod. “With the introduction of artificial intelligence, every single thing can be fake.”
