Get Ready For Deepfakes To Be Used In Financial Scams

from the it's-coming dept

Last month, scammers hijacked the Twitter accounts of former President Barack Obama and dozens of other public figures to trick victims into sending money. Thankfully, this brazen act of digital impersonation only fooled a few hundred people. But artificial intelligence (AI) is enabling new, more sophisticated forms of digital impersonation. The next big financial crime might involve deepfakes—video or audio clips that use AI to create false depictions of real people.

Deepfakes have inspired dread since the term was coined three years ago. The most widely discussed scenario is a deepfake smear of a candidate on the eve of an election. But while this fear remains hypothetical, another threat is currently emerging with little public notice. Criminals have begun to use deepfakes for fraud, blackmail, and other illicit financial schemes.

This should come as no surprise. Deception has always existed in the financial world, and bad actors are adept at employing technology, from ransomware to robo-calls. So how big will this new threat become? Will deepfakes erode truth and trust across the financial system, requiring a major response by the financial industry and government? Or are they just an exotic distraction from more mundane criminal techniques, which are far more prevalent and costly?

The truth lies somewhere in between. No form of digital disinformation has managed to create a true financial meltdown, and deepfakes are unlikely to be the first. But as deepfakes become more realistic and easier to produce, they offer powerful new weapons for tech-savvy criminals.

Consider the most well-known type of deepfake, a “face-swap” video that transposes one person’s expressions onto someone else’s features. These can make a victim appear to say things she never said. Criminals could share a face-swap video that falsely depicts a CEO making damaging private comments—causing her company’s stock price to fall, while the criminals profit from short sales.

At first blush, this scenario is not much different from the feared political deepfake: a false video spreads through social or traditional media to sway mass opinion about a public figure. But in the financial scenario, perpetrators can make money on rapid stock trades even if the video is quickly disproven. Smart criminals will target a CEO already embroiled in some other corporate crisis, who may lack the credibility to refute a clever deepfake.

In addition to video, deepfake technology can create lifelike audio mimicry by cloning someone’s voice. Voice cloning is not limited to celebrities or politicians. Last year, a CEO’s cloned voice was used to defraud a British energy company out of $243,000. Financial industry contacts tell me this was not an isolated case. And it shows how deepfakes can cause damage without ever going viral. A deepfake tailored for and sent directly to one person may be the most difficult kind to thwart.

AI can generate other forms of synthetic media beyond video and audio. Algorithms can synthesize photos of fictional objects and people, or write bogus text that simulates human writing. Bad actors could combine these two techniques to create authentic-seeming fake social media accounts. With AI-generated profile photos and AI-written posts, the fake accounts could pass as human and earn real followers. A large network of such accounts could be used to denigrate a company, lowering its stock price due to false perceptions of a grassroots brand backlash.

These are just a few ways that deepfakes and other synthetic media can enable financial harm. My research highlights ten scenarios in total—one based in fact, plus nine hypotheticals. Remarkably, at least two of the hypotheticals have already come true in the few months since I first imagined them. A Pennsylvania attorney was scammed by imposters who reportedly cloned his own son’s voice, and women in India were blackmailed with synthetic nude photos. The threats may still be small, but they are rapidly evolving.

What can be done? It would be foolish to pin hopes on a silver bullet technology that reliably detects deepfakes. Detection tools are improving, but so are deepfakes themselves. Real solutions will blend technology, institutional changes, and broad public awareness.

Corporate training and controls can help inoculate workers against deepfake phishing calls. Methods of authenticating customers by their voices or faces may need to be re-examined. The financial industry already benefits from robust intelligence sharing and crisis planning for cyber threats; these could be expanded to cover deepfakes.

The financial sector must also collaborate with tech platforms, law enforcement agencies, journalists, and others. Many of these groups are already working to counter political deepfakes. But they are not yet as focused on the distinctive ways that deepfakes threaten the financial system.

Ultimately, efforts to counter deepfakes should be part of a broader international strategy to secure the financial system against cyber threats, such as the one the Carnegie Endowment is currently developing together with the World Economic Forum.

Deepfakes are hardly the first threat of financial deception, and they are far from the biggest. But they are growing and evolving before our eyes. To stay ahead of this emerging challenge, the financial sector should start acting now.

Jon Bateman is a fellow in the Cyber Policy Initiative of the Technology and International Affairs program at the Carnegie Endowment for International Peace.


Comments on “Get Ready For Deepfakes To Be Used In Financial Scams”

Anonymous Coward says:

Missing technique

Scams are perpetrated because they are profitable. Scams are profitable because the risk of getting caught, versus the amount of money to be obtained, is favorable. In particular, scams are often profitable because even when the victim reports the scam, it is very difficult or outright impossible to reverse the damage, so the scammer walks away with the money. This suggests an obvious countermeasure: make it more difficult for a scam to remain profitable after it has been discovered and reported to the authorities.

For the financial industry, this suggests having agreements in place to forcibly unwind bad transactions. Scammers try to move the money to non-cooperating banks as soon as possible, so that the transaction cannot be unwound. Counter this with an industry-wide policy under which transfers to entities that refuse to participate in a forced-unwind protocol are delayed long enough that there is a good chance the victim will report the scam before the transfer completes. This would motivate more parties to join in, or else their customers will complain about the long delays.

Participating parties would in turn ensure that their customers can only use these funds for purposes that can be unwound. Notably, this would mean no cash withdrawals of transferred funds until after the delay period. We already have some protocols like this for paper checks, though those were designed for the case where the check bounces. Today, when the sending bank confirms the account is funded, the receiving entity is often willing to release the funds more quickly. Reverse that default, and it becomes much easier to recover from certain types of scams.
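To make the mechanics concrete, here is a minimal sketch in Python of the hold-and-unwind scheme described above. Everything in it—the hold periods, the Transfer and Ledger types, the report_scam call—is a hypothetical illustration of the incentive structure, not any real banking protocol or API.

```python
# Minimal sketch of the commenter's hold-and-unwind idea. All names and
# values here are assumptions for illustration, not a real payments system.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Dict

# Assumed hold periods; real values would be set by industry agreement.
COOPERATING_HOLD = timedelta(hours=2)
NON_COOPERATING_HOLD = timedelta(days=3)

@dataclass
class Transfer:
    tx_id: str
    amount: float
    receiver_cooperates: bool  # does the receiving bank join the unwind protocol?
    initiated: datetime
    unwound: bool = False

    def release_time(self) -> datetime:
        # Transfers to non-cooperating banks are held much longer.
        hold = COOPERATING_HOLD if self.receiver_cooperates else NON_COOPERATING_HOLD
        return self.initiated + hold

    def is_final(self, now: datetime) -> bool:
        # No cash withdrawal of the funds until the hold elapses.
        return not self.unwound and now >= self.release_time()

@dataclass
class Ledger:
    transfers: Dict[str, Transfer] = field(default_factory=dict)

    def send(self, tx: Transfer) -> None:
        self.transfers[tx.tx_id] = tx

    def report_scam(self, tx_id: str, now: datetime) -> bool:
        # Simplification: cooperating banks would also honor unwinds after
        # release; here a reported transfer can only be reversed during its hold.
        tx = self.transfers[tx_id]
        if tx.is_final(now):
            return False  # funds already released
        tx.unwound = True
        return True
```

Under these assumed holds, a victim has three days to report a transfer bound for a non-cooperating bank but only two hours for a cooperating one; the asymmetry is tolerable because cooperating banks, by definition, agree to unwind transfers even after release.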

PaulT (profile) says:

Re: 2FA

"I wonder if these accounts had 2-factor-auth enabled and if yes then how did the attacker managed to work around it."

If you’re referring to the Twitter accounts, the attackers gained direct access to Twitter’s internal account admin console, so they could do whatever they wanted with the accounts.

https://www.vice.com/en_us/article/jgxd3d/twitter-insider-access-panel-account-hacks-biden-uber-bezos
