Deepfake Of Tom Cruise Has Everyone Freaking Out Prematurely

from the not-deep-enough-yet dept

You may have heard that in recent days a series of deepfake videos appeared on TikTok of a fake Tom Cruise looking very Tom-Cruise-ish all while doing mostly non-Tom-Cruise-ish things. After that series of short videos came out, the parties responsible for producing them, Chris Ume and Cruise impersonator Miles Fisher, put out a compilation video sort of showing how this was all done.

As you can see, this was all done in the spirit of educating the public on what is possible with this kind of technology and, you know, fun. Unfortunately, some folks out there aren’t finding any fun in this at all. Instead, an understandable fear of how this technology might disrupt our lives is leading to some far less understandable conclusions about what we should do about it.

For instance, some folks apparently think that deepfake outputs should be considered the intellectual property of those who are the subjects of the deepfakes.

A recent deepfake of Hollywood star “Tom Cruise” sparked a bombshell after looking very close to real. Now it has been claimed they are on their way to becoming so good, that families of the dead should own the copyright of their loved ones in deepfakes.

Lilian Edwards, a professor of law and expert in the technology, says the law hasn’t been fully established yet. She believes many will claim they should own the rights, while some may not.

She told BBC: “For example, if a dead person is used, such as (the actor) Steve McQueen or (the rapper) Tupac, there is an ongoing debate about whether their family should own the rights (and make an income from it).”

Now, I want to be somewhat generous here, but this is still a terrible idea. Let’s just break this down practically. In the interest of being fair, it is understandable that people would be creeped out by deepfake creations of either their dead relatives or themselves. Let’s call that a given. But why is the response to that to try to inject some kind of strange intellectual property right into all of this? Why should Steve McQueen’s descendants have some right to control this kind of output? And why are we using McQueen and Tupac as the examples here, given that both are public figures? What problem does this solve?

The answer would be, I think: control over a person’s likeness rights. But such control is fraught with the potential for overreach and over-protection, and likeness rights have a history of being enforced with zero nuance about what should not be considered infringing behavior or what counts as fair use. Techdirt’s pages are littered with examples of this. Add to all of this that purveyors of deepfakes are quite often internationally located, anonymous, and unlikely to pay the slightest attention to the kind of likeness rights being bandied about, and you really have to wonder why we’re even entertaining this subject.

And then there are the people who think this Tom Cruise deepfake means that soon we’ll simply have no functional legal system at all.

The CEO of Amber, a video verification site, believes deepfake evidence will raise reasonable doubt. Mr Allibhai told us: “Deepfakes are getting really good, really fast.

“I am worried about both aural/visual evidence being manipulated and also just the fact that when believable fake videos exist, they will delegitimise genuine evidence and defendants will raise reasonable doubt. When the former happens, innocent people will go to jail and when the latter happens, criminals will be set free. Due process will be compromised and a core foundation of democracy is undermined. Judges will drop cases, not necessarily because they believe jurors will be unable to tell the difference: they themselves, and most humans for that matter, will be unable to tell the difference between fact and fiction soon.”

Folks, we really need to slow our roll here. Deepfake technology is progressing. It isn’t progressing slowly, but neither is it making wild, unforeseen leaps. The collapse of the legal system because nobody can tell truth from fiction may well come one day, but it certainly won’t be heralded by a Tom Cruise deepfake.

In fact, you really have to dig into how the Cruise videos were made to understand how unusual they are.

The Tom Cruise fakes, though, show a much more beneficial use of the technology: as another part of the CGI toolkit. Ume says there are so many uses for deepfakes, from dubbing actors in film and TV, to restoring old footage, to animating CGI characters. What he stresses, though, is the incompleteness of the technology operating by itself. Creating the fakes took two months to train the base AI models (using a pair of NVIDIA RTX 8000 GPUs) on footage of Cruise, and days of further processing for each clip. After that, Ume had to go through each video, frame by frame, making small adjustments to sell the overall effect; smoothing a line here and covering up a glitch there. “The most difficult thing is making it look alive,” he says. “You can see it in the eyes when it’s not right.”

Ume says a huge amount of credit goes to Fisher, a TV and film actor who captured the exaggerated mannerisms of Cruise, from his manic laugh to his intense delivery. “He’s a really talented actor,” says Ume. “I just do the visual stuff.” Even then, if you look closely, you can still see moments where the illusion fails, as in the clip below where Fisher’s eyes and mouth glitch for a second as he puts the sunglasses on.
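To make concrete what that two-month training run is actually building, here is a minimal sketch of the architecture most face-swap tools are built around: a single shared encoder plus one decoder per identity. To be clear, this is an illustrative toy under assumed parameters (the layer sizes, the 64x64 crops, the training loop), not a reconstruction of Ume’s actual pipeline.

```python
# Minimal sketch of the classic face-swap setup: a shared encoder with one
# decoder per identity. Illustrative only; all sizes and training details are
# assumptions, not the pipeline used for the Cruise videos.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.ReLU(), # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, 512),  # identity-agnostic latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

encoder = Encoder()
decoder_target = Decoder()     # trained only on footage of the target (Cruise)
decoder_performer = Decoder()  # trained only on footage of the performer (Fisher)
opt_target = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder_target.parameters()), lr=5e-5)

def train_step(faces, decoder, optimizer):
    """One reconstruction step: the decoder learns to rebuild its own identity
    from the shared latent code (L1 loss is typical for this setup)."""
    optimizer.zero_grad()
    loss = nn.functional.l1_loss(decoder(encoder(faces)), faces)
    loss.backward()
    optimizer.step()
    return loss.item()

# The swap itself: encode the performer's frame, decode it with the target's
# decoder. A random tensor stands in for a real aligned face crop here.
with torch.no_grad():
    performer_frame = torch.rand(1, 3, 64, 64)
    swapped_frame = decoder_target(encoder(performer_frame))
```

The trick is that the shared encoder is forced to represent pose, expression, and lighting in a way both decoders can use, while each decoder learns to render exactly one face: run the performer’s frames through the target’s decoder and you get the target making the performer’s expressions. Months of GPU time go into making those reconstructions sharp, and, as Ume describes, the raw output still needs frame-by-frame compositing afterward.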

This isn’t something where you push a couple of buttons and the next thing you know Tom Cruise is committing a homicide on video. Instead, creating these kinds of deepfakes takes time, hardware, skill, and, in this case, a talented actor who already looked like the subject of the deepfake. It’s a good deepfake, don’t get me wrong. But it was neither easy to make nor terribly difficult to spot for what it is.

All of which isn’t to say that deepfakes might not someday present problems. I actually have no doubt that they will. But as with every other kind of new technology, the warnings and fears you hear are likely to be greatly exaggerated compared with what those challenges will actually turn out to be.


Comments on “Deepfake Of Tom Cruise Has Everyone Freaking Out Prematurely”

32 Comments
Ehud Gavron (profile) says:

Bad news

Sorry, relatives of dead people, you don’t own the rights to their likeness.

https://www.owe.com/resources/legalities/7-issues-regarding-use-someones-likeness/

This means — in simple words — you have no rights to sue if someone uses your dead relative’s likeness, your living relative’s likeness, or your likeness.

The court system has enough burdens to handle without these idiots who think intellectual property rights are a thing they have… but don’t – under current US law.

Ehud "I’m not a lawyer and I’m not a stupid litigious jerk" Gavron


Ehud Gavron (profile) says:

Re: Re: Bad news -- NO, you're WRONG.

In America we don’t just let lawyers decide everything.

Is that you and the mouse in your pocket? The lawyers do decide everything, but that’s after they get upgraded lawyer–>lawmaker.

You do not have any rights to your likeness, like it or not. Neither do randos that say they’re related to you. NONE, ZERO, ZILCH.

If you don’t like it, feel free to run for Congress and pass more stupid laws… and good luck finding constitutional support for your ideas!

E


Upstream (profile) says:

Re: Danger is not in the best, but in surveillance quality.

Anyone could easily be framed with just a bit of supporting evidence/testimony added to blurry video. Doable NOW with off-the-shelf software.

I think this could be a real problem, possibly sooner rather than later. Combine it with juries willing to believe BS "forensic evidence" like this and a corrupt, authoritarian government that is quite willing to present such "evidence" to discredit and imprison any opposition, and you have the makings of a dystopian nightmare.

PaulT (profile) says:

Re: Danger is not in the best, but in surveillance quality.

"Anyone could easily be framed with just a bit of supporting evidence / testimony to blurry video"

That already happens, and sometimes the video isn’t even necessary. Every era in history has false convictions based on shaky evidence, be that biased witness testimony, violently extracted confessions, "expert" witnesses who are nothing of the sort, DNA or other physical evidence that’s been contaminated or falsified, etc.

Deepfakes present an extra challenge in that people who have come to consider video evidence as conclusive will have to learn that it’s no longer reliable, and a new strand of real expertise will have to become established to determine whether video is real or even admissible. But, this is nowhere near a new thing.

Anonymous Coward says:

Re:

they themselves, and most humans for that matter, will be unable to tell the difference between fact and fiction soon."
Folks, we really need to slow our roll here.

True, we need to slow our roll here, but only because we’re already at that place. It’s called Qanon. Welcome to the surreal world, we’ve got cookies.

PaulT (profile) says:

Re: Re: Re:

"It’s called Qanon."

Yeah, that’s the scariest part of this in reality. The cultists are already dismissing video evidence of anything bad that Trump does/says as deepfakes, and I have no doubt that some are already working on bolstering their wilder claims with deepfakes showing their opponents doing horrific things (which they will, of course, accept unquestioningly at face value).

If they already believe the insane things they do with zero evidence, what can they be convinced to believe with believably faked evidence – and what will they be inspired to do as a result of that?

Anonymous Coward says:

If you’ve never worked with 3D, you have no conception of just how difficult this is. For one, it takes real computer horsepower, not the stuff you’ll find in Walmart-type computers.

Just doing stills eats time for breakfast, not to mention video, frame by frame. And he’s right, we know at a glance most of the time if it’s fake. It will appear off and it takes a lot of skill as well as time to pull it off.

PaulT (profile) says:

Re: Re:

"For one, it takes real computer horsepower, not the stuff you’ll find at Walmart type computers."

Yes, but why would you assume that people are going to be doing this on their home PC? There are already phone apps that produce short deepfaked videos of your face on top of some celebrity or movie clip, and they just use a server farm somewhere to deliver the results in seconds. Yes, those likely took a lot of prep work to get the underlying models ready, and nobody’s really going to be fooled by them. But the basis of the tech is there, and there’s no reason to believe things will depend on whether or not someone’s desktop is up to the job.

Anonymous Coward says:

Due process will be compromised and a core foundation of democracy is undermined. Judges will drop cases, not necessarily because they believe jurors will be unable to tell the difference: they themselves, and most humans for that matter, will be unable to tell the difference between fact and fiction soon.

Yep, this is totally a concern. Because the current form of our judicial system is entirely based on the early forms of digital evidence gathering. There has never been any major period in time when digital evidence was not a core part of the judicial system, and thus with that tenet removed, it will collapse in on itself.

/s

Rekrul says:

We take CGI for granted today and stuff like this doesn’t surprise us, but I remember back in 1994 when it was a big deal for them to put Brandon Lee’s face on a stunt double for a couple of distance shots of the character. Now you could practically remake the entire movie with his likeness on a room full of consumer-level systems.

Going even further back, it took them months to render all the CGI for the movie The Last Starfighter in 1984, and now those graphics look like a video game from a system a couple of generations ago. Systems can now generate graphics in real time that blow that movie away.

Another 10-20 years and you’ll probably be able to generate a flawless CGI copy of a person. The voice might still be a problem though. Computer speech has improved a great deal, but it still doesn’t sound natural.

aerinai (profile) says:

Man -- we better rein in Hollywood...

So… if you or I make a video and use video manipulation, it is a ‘danger to society’. But if Hollywood does it, it is entertainment. I don’t know about you, but Michael Bay makes it look really good when he’s blowing stuff up. How am I supposed to tell the difference between that and ‘real life’!

So… rawr rawr rawr… think of the children! Ban video editing software! /s

Anonymous Coward says:

fwiw: I think it should be possible to create an algorithm capable of detecting these manipulations. (A toy sketch of one such check follows the quoted list below.)

In the mean time, there are things to look for.

Quote from the link below:

The Detect Fakes experiment offers the opportunity to learn more about DeepFakes and see how well you can discern real from fake. When it comes to AI-manipulated media, there’s no single tell-tale sign of how to spot a fake. Nonetheless, there are several DeepFake artifacts that you can be on the look out for.

1) Pay attention to the face. High-end DeepFake manipulations are almost always facial transformations.
2) Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Is the agedness of the skin similar to the agedness of the hair and eyes? DeepFakes are often incongruent on some dimensions.
3) Pay attention to the eyes and eyebrows. Do shadows appear in places that you would expect? DeepFakes often fail to fully represent the natural physics of a scene.
4) Pay attention to the glasses. Is there any glare? Is there too much glare? Does the angle of the glare change when the person moves? Once again, DeepFakes often fail to fully represent the natural physics of lighting.
5) Pay attention to the facial hair or lack thereof. Does this facial hair look real? DeepFakes might add or remove a mustache, sideburns, or beard. But, DeepFakes often fail to make facial hair transformations fully natural.
6) Pay attention to facial moles. Does the mole look real?
7) Pay attention to blinking. Does the person blink enough or too much?
8) Pay attention to the size and color of the lips. Does the size and color match the rest of the person’s face?

These eight questions are intended to help guide people looking through DeepFakes. High-quality DeepFakes are not easy to discern, but with practice, people can build intuition for identifying what is fake and what is real. You can practice trying to detect DeepFakes at Detect Fakes.

Detect DeepFakes: How to counteract misinformation created by AI
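On the earlier point that an algorithm ought to be able to detect these manipulations, tip 7 above (blinking) is one of the few cues that is easy to show in code. A common heuristic is the eye aspect ratio (EAR) from Soukupová and Čech’s 2016 blink-detection work: given six landmarks around each eye, the eye’s height-to-width ratio collapses when the eye closes. The sketch below assumes some face-landmark detector (dlib, MediaPipe, or similar) has already produced those points, and the thresholds are illustrative defaults, not tuned values.

```python
# Toy blink-rate check using the eye aspect ratio (EAR). Assumes a landmark
# detector has already supplied six (x, y) points per eye, ordered p1..p6
# around the eye contour. Threshold values here are illustrative assumptions.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops sharply when the eye closes."""
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_per_frame, closed_thresh=0.2, min_frames=2):
    """Count a blink as a run of at least min_frames consecutive low-EAR frames."""
    blinks, run = 0, 0
    for ear in ear_per_frame:
        if ear < closed_thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks + (1 if run >= min_frames else 0)

# People blink roughly 15-20 times per minute. A minute of 30fps video whose
# subject never blinks is worth a second look.
never_blinks = [0.3] * 1800          # 60 seconds of "eyes open" EAR values
print(count_blinks(never_blinks))    # -> 0: suspicious
```

Early deepfakes were notorious for under-blinking, though generators have largely closed that particular tell since, which is exactly why the list above stresses building intuition across many cues rather than trusting any single test.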

PaulT (profile) says:

Re: Re: Re:

You miss the most important point – it’s not about making the fakes undetectable, it’s about fooling people when it matters. Footage might not be able to fool a court of law that has access to proper forensics and experts who can dissect the footage.

But, that doesn’t mean that people won’t, say, release a bunch of faked videos just before an election in order to rile up a certain section of the voting bloc who won’t believe the "mainstream media" when they confirm they’re fakes. Or that an unscrupulous government might fake justifications for military action while withholding the original files with the tell included in them, stripping out such data when converting/editing it for broadcast.

Weirdly, the thing this brings to mind is the classic Schwarzenegger movie The Running Man. Some of the tech that seemed far-fetched in that movie has already come to pass (for example, booking a flight directly from the TV seemed amazing at the time, but I do it nearly monthly under non-pandemic circumstances). Spoilers below:

In that film, our hero was framed for massacring, from his helicopter, innocent people who were rioting for food. The method used? Well, essentially a deepfake, although the way it’s presented could just be tricky editing. But the availability of deepfakes is confirmed later on when, unable to locate the hero who has escaped from the game arena, and knowing that the public who have started to support him might be on the verge of revolt, they fake a fatal fight showing him die. Thankfully, he saves the day by storming the control room and playing the original footage that showed him trying to save, not kill, the people in the riot. But if he’d actually been killed before doing that, nobody would have known that they had been tricked again, and the dystopian future would have continued.

It’s a silly piece of entertainment that has a place in my heart for its cheesiness, but there’s definitely food for thought.

Anonymous Coward says:

Re: Re: Re: Re:

But, that doesn’t mean that people won’t, say, release a bunch of faked videos just before an election in order to rile up a certain section of the voting bloc who won’t believe the "mainstream media" when they confirm they’re fakes. Or that an unscrupulous government might fake justifications for military action while withholding the original files with the tell included in them, stripping out such data when converting/editing it for broadcast..

As if they don’t do those things by other means now.

PaulT (profile) says:

Re: Re: Re:2 Re:

Of course they do. My point is, you can put in whatever safeguards or tells you want in order to ensure that nobody’s fooled by these for long. But if the aim is to get an immediate response from people who get their politics from memes, that won’t matter. The damage will have been done before the counter-proof is provided, and they’re trained to dismiss opposing evidence as "fake news" anyway.

I’m not sure what the fix would be in today’s climate, but saying that there’s nothing to worry about because detection methods will keep pace with the creation tech might be missing the reality about how some people will use these things.

JoeCool (profile) says:

Re: Re: Re: Re:

In that film, our hero was framed for massacring, from his helicopter, innocent people who were rioting for food. The method used? Well, essentially a deepfake, although the way it’s presented could just be tricky editing.

It was just splicing. They cut sections of his dialogue, and spliced in a fake voice on the other end of the radio. The deepfake came later, when they faked his death so they could hunt him down off-screen.
