Deep Fakes: Let's Not Go Off The Deep End

from the some-perspective dept

In just a few short months, “deep fakes” have struck fear into technology experts and lawmakers alike. Already there are legislative proposals, a law review article, national security commentaries, and dozens of opinion pieces claiming that this new deep fake technology — which uses artificial intelligence to produce realistic-looking simulated videos — will spell the end of truth in media as we know it.

But will that future come to pass?

Much of the fear of deep fakes stems from the assumption that this is a fundamentally new, game-changing technology that society has never faced before. But deep fakes are really nothing new; history is littered with deceptive practices — from Hannibal’s fake war camp to Will Rogers’ too-real radio impersonation of President Coolidge to Stalin’s disappearing of enemies from photographs. And society’s reaction to another recent technological tool of media deception — digital photo editing and Photoshop — teaches important lessons that provide insight into deep fakes’ likely impact on society.

In 1990, Adobe released the groundbreaking Adobe Photoshop to compete in the quickly evolving digital photo-editing market. This technology, and the myriad competitors that never reached Photoshop’s eventual popularity, allowed the user to digitally alter real photographs uploaded into the program. While competing services required some expertise to use, Adobe designed Photoshop to be user-friendly and accessible to anyone with a Macintosh computer.

With the new capabilities came new concerns. That same year, Newsweek published an article called “When Photographs Lie.” As Newsweek warned, the consequences of this rise in photographic manipulation techniques could be disastrous: “Take China’s leaders, who last year tried to bar photographers from exposing [the leaders’] lies about the Beijing massacre. In the future, the Chinese or others with something to hide wouldn’t even worry about photographers.”

These concerns were not entirely without merit. Fred Ritchin, former picture editor of The New York Times Magazine and now Dean Emeritus of the International Center of Photography School, has long argued that trust in photography has eroded over the past few decades thanks to photo-editing technology:

There used to be a time when one could show people a photograph and the image would have the weight of evidence—the “camera never lies.” Certainly photography always lied, but as a quotation from appearances it was something viewers counted on to reveal certain truths. The photographer’s role was pivotal, but constricted: for decades the mechanics of the photographic process were generally considered a guarantee of credibility more reliable than the photographer’s own authorship. But this is no longer the case.

It is true that the “camera never lies” saying can no longer be sustained — the camera can and often does lie when the final product has been manipulated. Yet the crisis of truth that Ritchin and Newsweek predicted has not come to pass.

Why? Because society caught on and adapted to the technology.

Think back to June 1994, when Time magazine ran O.J. Simpson’s mugshot on its cover. Time had drastically darkened the photo, making Simpson appear much darker-skinned than he actually was. What’s worse, Newsweek ran the unedited version of the mugshot, and the two magazines sat side-by-side on supermarket shelves. While Time defended the alteration as an artistic choice with no intended racial implications, the obviously edited photograph triggered massive public outcry.

Bad fakes were only part of the growing public awareness of photographic manipulation. For years, fashion magazines have employed deceptive techniques to alter the appearance of cover models. Magazines with more attractive models on the cover generally sell more copies than those featuring less attractive ones, so editors retouch photos to make them more appealing to the public. Unfortunately, this practice created an unrealistic image of beauty in society, and once it was discovered, health organizations began publicly warning about the dangers the phenomenon caused — most notably eating disorders. Thanks to the ensuing public outcry, families across the country became aware of photo-editing technology and what it was capable of.

Does societal adaptation mean that no one falls for photo manipulation anymore? Of course not. But instead of prompting the death of truth in photography, awareness of the new technology has encouraged people to use other indicators — such as the trustworthiness of the source — to make informed decisions about whether an image is authentic. As a result, news outlets and other publishers of photographs have gone on to establish policies and make decisions regarding the images they use with an eye toward fostering their audience’s trust. For example, in 2003, the Los Angeles Times quickly fired a photographer who had digitally altered an Iraq War photograph, because the editors realized that publishing a manipulated image would diminish readers’ perception of the paper’s veracity.

No major regulation or legislation was needed to prevent the apocalyptic vision of Photoshop’s future; society adapted on its own.

Now, however, the same “death of truth” claims — mainly in the context of fake news and disinformation — ring out in response to deep fakes as new artificial-intelligence and machine-learning technology enters the market. What if someone released a deep fake of a politician appearing to take a bribe right before an election? Or of the president of the United States announcing an imminent missile strike? As Andrew Grotto, International Security Fellow at the Center for International Security and Cooperation at Stanford University, predicts, “This technology … will be irresistible for nation states to use in disinformation campaigns to manipulate public opinion, deceive populations and undermine confidence in our institutions.” Perhaps even more problematic, if society has no means to distinguish a fake video from a real one, any person could have plausible deniability for anything they do or say on film: It’s all fake news.

But who is to say that societal response to deep fakes will not evolve similarly to the response to digitally edited photographs?

Right now, deep fake technology is far from flawless. While some fakes may appear incredibly realistic, others have glaring imperfections that can alert the viewer to their forged nature. As with Photoshop and digital photograph editing before it, poorly made fakes generated through cellphone applications can educate viewers about the existence of this technology. When the public becomes aware, the harms posed by deep fakes will fail to materialize to the extent predicted.
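To make this concrete, one early “tell” researchers noticed was that face-swap models, trained mostly on open-eyed photographs, generated faces that blinked far less often than real people do. Below is a minimal sketch of that kind of heuristic. It assumes a per-frame eyes-open signal coming from some external face-landmark detector (not shown here), and the thresholds are purely illustrative:

```python
# Toy blink-rate heuristic for spotting one early deepfake imperfection.
# Assumes `eyes_open` is a per-frame boolean signal produced by some
# face-landmark detector; real detectors combine many such signals.

def count_blinks(eyes_open: list[bool]) -> int:
    """Count open->closed transitions; each one marks the start of a blink."""
    return sum(
        1 for prev, cur in zip(eyes_open, eyes_open[1:])
        if prev and not cur
    )

def looks_suspicious(eyes_open: list[bool], fps: float = 30.0,
                     min_blinks_per_minute: float = 4.0) -> bool:
    """Flag clips whose blink rate falls well below typical human rates.

    Humans blink roughly 15-20 times per minute; the threshold here is
    deliberately lax so that only extreme outliers are flagged.
    """
    minutes = len(eyes_open) / fps / 60.0
    if minutes == 0:
        return False
    return count_blinks(eyes_open) / minutes < min_blinks_per_minute
```

A clip that never blinks across a full minute of footage would trip this check, while ordinary footage would pass. Of course, once this tell was publicized, generators learned to blink — which is exactly the adaptation arms race the article describes, and why no single heuristic stays reliable for long.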

Indeed, new controversies surrounding the use of this technology are likewise increasing public awareness about what the technology can do. For example, the term “deep fake” actually comes from a Reddit user who began using this technology to generate realistic-looking fake pornographic videos of celebrities. This type of content rightfully sparked outrage as an invasion of the depicted person’s privacy rights. As public outcry began to ramp up, Reddit publicly banned the deep fake community and any involuntary pornography from its website. As with the public outcry that stemmed from the use of Photoshop to create an unrealistic body image, the use of deep fake technology to create inappropriate and outright appalling content will, in turn, make the public more aware of the technology, potentially stemming harms.

Perhaps most importantly, many policymakers and private companies have already begun taking steps to educate the public about the existence and capabilities of deep fakes. Notable lawmakers such as Sens. Mark Warner of Virginia and Ben Sasse of Nebraska have recently made deep fakes a major talking point. BuzzFeed released a public service announcement from “President Obama,” which was in fact a deep fake video with a voice-over from Jordan Peele, to raise awareness of the technology. And Facebook recently announced that it is investing significant resources into deep fake identification and detection. With so much focus on educating the public about the existence and uses of this technology, it will be more difficult for bad actors to successfully spread harmful deep fake videos.

That is not to say deep fakes pose no new harms or threats. Unlike Photoshop, deep fake technology can be used by anyone with a smartphone, meaning that far more deep fakes may be produced and shared. And unlike in the 1990s, significantly more people now use the internet to share news and information, letting content spread across the globe at breakneck speed.

However, we should not assume that society will fall into an abyss of deception and disinformation if we do not regulate the technology. The technology offers significant benefits, such as age-progressing photos of children who have been missing for decades or creating lifelike versions of historical figures for the classroom. Instead of rushing to draft legislation, lawmakers should look to the past and recognize that deep fakes are not some unprecedented problem; they are simply the newest technique in a long line of deceptive audiovisual practices stretching back through history. So long as we understand this, we can be confident that society will come up with ways of mitigating new harms from deep fakes on its own.

Jeffrey Westling is a Technology and Innovation policy associate at the R Street Institute, a free-market think tank based in Washington, D.C.



Comments on “Deep Fakes: Let's Not Go Off The Deep End”

32 Comments
Mason Wheeler (profile) says:

Now, however, the same “death of truth” claims — mainly in the context of fake news and disinformation — ring out in response to deep fakes as new artificial-intelligence and machine-learning technology enter the market. What if someone released a deep fake of a politician appearing to take a bribe right before an election? Or of the president of the United States announcing an imminent missile strike?

Or the converse: what if someone released a legitimate video of something like that, and the corrupt politician was able to deflect it by claiming it’s just another deep fake?

I leave as an exercise to the reader to determine which of the two is more plausible, and which is more disturbing.

ECA (profile) says:

Not enough..

This author isnt old enough..
He hasnt seen all the Old fake photo’s and Articles created over the years..
They started lots in WWI and WWII, to fake out the other sides, but it has carried over into Everything, including Commercials..

WE dont need AI to fake anything.. Not even computers.
https://qz.com/911990/the-cottingley-fairy-hoax-of-1917-is-a-case-study-in-how-smart-people-lose-control-of-the-truth/

https://www.telegraph.co.uk/news/uknews/8679113/Five-famous-hoaxes-which-fooled-the-world.html

http://ichef.bbci.co.uk/wwfeatures/wm/live/976_549/images/live/p0/57/98/p057983q.jpg

http://www.bbc.com/future/story/20170629-the-hidden-signs-that-can-reveal-if-a-photo-is-fake

Fake Pictures and News articles have been around along time..

Anonymous Coward says:

For me, the disturbing presence of AI isn’t with the deep fakes; eventually they’ll get this tech so that it looks less fake than real footage.

For me the place that AI is disturbing is in social analysis. It’s not about being able to create a deep fake, it’s about being able to know exactly when to release such a thing, who to release it to, and how to spin it to create the societal results you desire.

This is an area that humans aren’t good at; we can only go so many layers deep and can only hold the narrative together for so long. But AI can keep its message consistent while manipulating the human players toward a desired result.

THIS is a problem that I see looming in our future and one that has no easy answer.

Anonymous Coward says:

Re: Re:

And more importantly, who not to release it to. Mass disinformation is vulnerable because it reaches the ears of those with strong, competing narratives, who can then make the next move based on it. However, if the disinformation is selectively targeted at those considered vulnerable to it while avoiding purveyors of counternarratives (such as media elements or researchers), then it can spread far further in the shadows while being far harder to research, pin down, and discredit.

Anonymous Coward says:

I think you underestimate the antiquity of fake news. What’s carved in stone by one Pharaoh could be–and sometimes was–re-carved by the next imperial chiseler. The Assyrian emperor Sennacherib is notorious for the inflation of enemy body counts in later carved-in-stone accounts of the same battle, although he didn’t go back to update the earlier steles. I bet someone who knows something about Chinese history (which is to say, someone other than I) could add a citation or two.

Qwertygiy says:

Re:

I believe this is the perfect example of what you are looking for: https://en.wikipedia.org/wiki/Burning_of_books_and_burying_of_scholars

"The burning of books and burying of scholars […] refers to the supposed burning of texts in 213 BCE and live burial of 460 Confucian scholars in 212 BCE by the First Emperor of the Qin dynasty of Imperial China."

"Modern scholars doubt the details of the […] main source since the author, Sima Qian, wrote a century or so after the events and was an official of the Han dynasty, which could be expected to portray the previous rulers unfavorably."

"While it is clear that the First Emperor gathered and destroyed many works which he regarded as subversive, two copies of each school were to be preserved in imperial libraries. These were destroyed in the fighting following the fall of the dynasty."

"Martin Kern adds that Qin and early Han writings frequently cite the Classics, especially the Documents and the Classic of Poetry, which would not have been possible if they had been burned, as reported."

"Sima Qian’s account of the execution of the scholars [in the text known as the Shiji] has similar difficulties. First, no text earlier than the Shiji mentions the executions, the Shiji mentions no Confucian scholar by name as a victim of the executions, and in fact, no other text mentions the executions at all until the 1st century AD. The earliest known use of the famous phrase "burning the books and executing the Confucians" is not noted until the early 4th century."

Gary (profile) says:

Death of Truth

We have been dealing with variations on this since forever. What I don’t understand is how the Trumpeters can write off any lies made by the Cheeto as a false narrative – despite (or because of) multiple organizations wholeheartedly fact-checking his words.
“8000 lies since elected? According to the biased media! I don’t have time to refute all of that.” I hear that every day, turns my stomach.

Agammamon says:

Re: Death of Truth

The same way Obama, Clinton, AOC, McCain, Bush, etc supporters wrote off the lies of ‘their guy’ despite the fact-checking.

Don’t let partisanship blind you to the fact that the Cheeto-In-Chief is only the most obnoxious example of behavior that has been standard practice in the US government for longer than even our oldest commenters have been alive.

Gary (profile) says:

Re: Re: Re: Death of Truth

I enjoy calling out the lies of any politicians I can find. It irritates me more than their crappy policies.
The Cheeto’s are bad lies, and I get attacked by Trumpeters as they simultaneously deny the lie, and claim it doesn’t matter because he’s so good at pussy grabbing.
WaPo and the NYT may have an axe to grind with the cheesy one – but they use facts to make their digs not bald falsehoods.

Qwertygiy says:

Re: Death of Truth

It’s partly because it’s very hard to find a truly neutral, trustworthy source anymore. Even articles that tell no lies are often slanted, including facts that support their general position, and excluding facts that don’t support their general position.

I’ve even had to stop to double-check Reuters a few times, a source I consider among the most reliable mainstream news available, because some articles have seemed to put more emphasis on Democratic positives and Republican negatives than is necessary.

When looking up an article about a toddler that got hold of a gun several months ago, CNN’s article was one paragraph of summary and two paragraphs of shocked quotes from a neighbor, and NBC’s had only a few more sentences than that. Fox News, meanwhile, had far more facts about the case: fewer quotes from that neighbor plus a few small quotes from other neighbors, lengthier quotes from the police who responded, and mentions of specific previous incidents similar to it.

Unfortunately, I believe social media is often to blame. "Short, simple, and fast" is more profitable than "complete, balanced, and fact-checked". Have to be the first one to tweet it to get that ad money.

Anonymous Coward says:

I Disagree Wholly, Deep Fakes Are A Terrifying Medium For Lies

Recorded video footage has been synonymous with truth for so long now that I can’t imagine a world where people will be willing to be skeptical. We live in a world where a large number of people think pro wrestling is a live depiction of real events. We live in a world where people think Stuart Little was a real mouse who someone dubbed over while it ate peanut butter.

Video broadcasts are a sacred well of truth to a lot of people, this won’t be an easy thing to adapt to (and so many people will resist it). It won’t be impossible to educate the public, but it will be slow and there will be a lot of damage done.

Qwertygiy says:

Re: I Disagree Wholly, Deep Fakes Are A Terrifying Medium For Lies

"We live in a world where people think Stuart Little was a real mouse who someone dubbed over while it ate peanut butter."

This doesn’t speak of our need to prevent the advancement of technology.

This speaks of our need to fix up our education system so that people know enough about reality to stop and think, "hey, mice don’t have hands, and even trained mice can’t walk upright, or drive cars, or give finger guns, or fly planes."

Teach people what is possible, and what can be faked. Teach them to find, examine, and evaluate evidence. And they’ll learn to look for what might have been faked.

After all, nobody needed photoshop or deep fakes to start or maintain the antivax movement. Nobody needed photoshop or deep fakes to create the Cottingley Fairies. Nobody needed photoshop or deep fakes to convince the Russians that Rasputin was a wizard bewitching the Queen. The better educated someone is about what can be regarded as trustworthy and what isn’t, the better they are equipped to avoid falling for fakes of any kind.

Bamboo Harvester (profile) says:

Re: I Disagree Wholly, Deep Fakes Are A Terrifying Medium For Lies

I’m not so sure about that. Video has been faked for a long time now, the tech has just gotten better.

Anyone else remember the video of the flame-throwing tank at Waco?

Hillary crossing the tarmac under mortar fire?

In the Waco vid, when the "real" footage was released, it was lower quality than the "faked" vid, which had a lot of people wondering WHICH of them was the fake.

We’re at a point now in video editing capability that I can easily see that cynicism becoming the new norm.

Anonymous Coward says:

It seems the concern is similar to that over color copiers and the ability to counterfeit money reaching a certain tipping point that makes it too easy to fool people by doing something a little better than it had ever been done before.

Why am I supposed to care about any of this? Since I’m being told what to care about and how to think, perhaps someone could tell me WHY.

Qwertygiy says:

Re: Re:

The comparison to color copiers is an interesting one, worth exploring, I believe. While the situation isn’t quite the same, there are a lot of similar problems that need to be dealt with.

  • How easy was it to fake something before this technology? Before color copiers, you would need a physical printing press of some sort in order to counterfeit money. It’s not easily automated, requires physically carving the dies, and leaves behind a hefty amount of evidence of what you’ve done, whereas a color copier can scan and go instantly, often with no evidence that it was used for the act.

Before deep fakes, you had many options. One, the traditional method, seen on SNL: actors and costumes and sets. Time-consuming, expensive, and you need certain people available. Two, use greenscreens to combine actors with another video. Less expensive, still has the people problem. Three, edit an existing video or audio manually. Can be done for free with any computer, but very time consuming to make it believable. Four, generate or alter a video with CGI. The good stuff is expensive, all of it is time-consuming. Deep fakes are practically instant, and all you need is a phone.

  • How close are the fakes to reality before and after this technology?

With copiers, it’s actually a step backwards. Money is printed on special blends of paper or even plastic in some nations. Your standard copiers aren’t intended for those materials, which are often difficult and illegal to reproduce. You also can’t use them to create any physical engraving — it all prints at the same depth.

With deep fakes, however, you can produce fairly convincing fakes in minutes or seconds, which would probably take either days or a whole team if using other procedures.

  • How can these new fakes be detected?

Money has a wide variety of anti-counterfeit measures that can be checked with the naked eye. Reflective portions that change color at different angles, holographic portions that only show up when backlit, variances in texture and material. Other methods, such as the presence of specific dyes or fabrics, including ones left only by printers, can be checked with cheaply available materials like that ever-present yellow marker, or an infrared light. And in the long run, serial numbers can be used to track or discredit specific bills. Additionally, commercial scanners and printers often include software that recognizes and rejects any money printed since 1996, due to the presence of the EURion constellation (or Omron rings) and other, unpublicized watermarks.

Deep fakes, on the other hand, don’t presently have any "smoking gun" features to distinguish them from legitimate videos, nor would it be feasible. Only governments are authorized to create genuine money, but there are no limits on who can produce a video file. This does not mean they’re impossible to detect; rather, it’s more in the nature of a forged signature than of counterfeit money. The presence of an earlier video, alternate sources of the event, or other contradicting evidence (such as alibis) can be used to discredit it, as well as any number of inconsistencies in the video itself — everything from mismatched pixelation to the Uncanny Valley.

  • What legitimate uses does the technology have which are an improvement over other options?

Copiers and scanners have a huge variety of advantages over old printing presses. The ability to print any design on demand rather than creating and arranging dies, the ability to instantly make a duplicate document at a comparable quality, rather than relying on standard photography or recreation, and the ease of use — good luck teaching grandma how to operate her personal Gutenberg.

Deep fake technology also has a wide variety of legitimate uses. After all, we’re essentially just talking about an advancement of existing motion capture. It can be used to enhance video games and other forms of alternate reality by translating actions taken by users in the real world to actions taken by characters in the virtual world. It can be used to assist in identifying suspects or missing persons. It can be used to accurately recreate a video, photograph, or written account of an event for theatrical or investigative purposes.

Anonymous Coward says:

It's inevitable

We will eventually have technology capable of producing flawlessly realistic fake photo and video. Technology marches on. Might as well be outraged at the sun rising in the morning, it will change nothing. Governments can outlaw the legitimate uses of the technology to some extent, but those seeking to harm and deceive people with it will do so anyway, to include the governments themselves.

And the people will catch on, eventually. Our grandchildren will laugh at someone having believed things that, to them, will seem obviously fake. Just as people of our time laugh at things like people having mistaken Orson Welles’ War of the Worlds radio drama for a real alien invasion.

The end of truth as we know it is the beginning of a new truth we don’t yet know, but should seek to learn. For refusing to learn will not make the truth untrue or vice versa.

Ninja (profile) says:

Again it boils down to critical thinking. Even if I see a deep fake video of Obama praising Hitler, and the video is masterfully made, one will question whether the video or the audio was manufactured. Of course this is a more glaring example, but my point is: if we are educated very early to think critically, there may be damage to some extent but it will be controllable.

We’ll need to make it the norm to watch for followups on more disturbing news and show people it’s honorable to admit to sharing some fake and apologize for it. You know, make retractions spread like wildfire, much like the fakes that generate such retractions.

My take on it: we will be extinct due to stupid before we are able to rein it in. But maybe I’m wrong.

crade (profile) says:

The concern as I see it isn’t that technology is making a new possibility, it’s that technology is making it too easy to create and too difficult to detect. Sure we maybe aren’t quite there yet, but that seems to be the direction we are going.
The big problems come when deep fakes become so ubiquitous that it just plain isn’t feasible to figure out the truth anymore.

Anonymous Coward says:

For a long time we’ve been saying that witness accounts are unreliable and video evidence is the way. "You say you saw him pull a gun, but your bodycam says otherwise." I wonder, if deepfake technology becomes ubiquitous, if it’ll go the other way; "The video shows he had a gun but we all know video can be faked. What did you see?"
