The 1994 movie “Forrest Gump” featured authentic-looking newsreel footage of the 1960s in which President Kennedy appeared to interact with Tom Hanks’ fictional Gump character. Trick photography is as old as cinema, and the only remarkable thing about such scenes was the technical care with which they were produced. At the time, these effects were state-of-the-art and required the substantial resources of a major studio.
What only Hollywood could do
back in the 1990s is soon coming to the average hacker everywhere, thanks to
advanced artificial-intelligence (AI) deep-learning algorithms that have
recently made it possible to create extremely realistic-looking audio and video
clips that are, basically, lies. An article
in the October 2018 issue of Scientific American
describes both the progress that AI experts have made in reducing the amount of
labor and expertise needed to create fake recordings, and the implications that
wider availability of such technology poses for the already eroded public trust
in media. Fakes made with advanced
deep-learning AI (called “deepfakes”) can be so good that even people who
personally know the subject in question—President Obama, in one
example—could not tell the clip was fake.
The critical issue posed in
the article is “. . . what will happen if a deepfake with significant social or political implications goes viral?” Such
fakes could be especially harmful if released just before a major
election. It takes time and expertise to
determine whether a video or audio record has been faked, and as technology
progresses, that difficulty will only increase.
By the time a faked video that influences an election has been revealed
as a fake, the election could be over.
We faced something similar in 2016: it has since been conclusively shown that Russia-based hackers spread disinformation of many kinds during the run-up to the Presidential election.
Some voters will believe
anything they see, especially if it fits their prejudices. But the more firmly a voter is embedded in one
camp or the other, the less likely they are to change their vote based on a
single fake video. The people who can
actually change the outcome of an election are those who are undecided going
into the final stretch of the campaign.
If they are taken in by a fake video, then real harm has been done to
the process.
On the other hand, the public
as a collective body is not always as stupid as experts naively think. If a deepfake ever manages to be widely
believed at a critical moment, and the fakery is later revealed publicly, the
more thoughtful among us will keep in mind the possibility of fakery
whenever we watch a video or listen to audio in the future. This can be likened to an immune-system
response. The first invasion of a new
pathogen into one’s body may do considerable damage, but a healthy immune
system creates antibodies that fight off both the current infection and any future attempts at invasion by the same pathogen.
If deepfakes begin to affect
the public conversation significantly, we will all get used to the fact that any audio or video, no matter how
genuine-looking, could be the concoction of some hacker’s imagination.
Low-tech versions of this sort
of thing happen all the time, but with lower stakes. When I’m not writing this blog, I find time
to do some lightning research, and a few years ago someone forwarded me a
YouTube clip purporting to be a security-camera video of a guy who got struck
by lightning, not once, but twice, and survived both times. I watched the grainy monochrome recording of
a man walking toward the camera on a sidewalk.
Suddenly there was a bright full-screen flash, and he was down on the pavement, apparently dead. Then he raised his head, shook himself, and groggily rose to his feet, only to have a second flash knock him down again. I later heard from another lightning expert that the video was definitely fake. Some people want so desperately to achieve viral fame that they will go to the trouble of setting up an elaborate fraud like this one just in the hope that their production will be kooky
enough to get shared widely. And in this
case, they succeeded.
Speaking theologically for a change, some (myself included) trace the origin of lies back to the father of lies himself, the devil, and attribute lying to the only Christian doctrine for which there is abundant empirical evidence: original sin. No amount of
high-tech defense is going to stop some people from lying, and if they can bend
deep-learning AI to nefarious purposes such as creating inflammatory deepfake videos,
they will. The best defense against such
scurrilous behavior is not necessarily just working harder to make
fake-video-detection technology better, although that is a good thing. It is to bear in mind that people will lie
sometimes, and to use the time-honored rules of evidence to seek the truth in
any situation. And to bear in mind something that is often forgotten these days: that there is such a thing as objective truth.
I think a more serious problem
than deepfake videos is the fact that in pursuit of the online dollar, social
media companies have trained millions of their customers to react to online
information with their lizard brains, going for the thing that is most
titillating and most conforming to their existing prejudices, regardless of the
likelihood that it’s true. They have
created an invisible mob eager and willing to be set off like a dry forest at
the touch of a match. And once the
forest fire is going, it doesn’t matter if the match was real or fake.
Sources: Brooke Borel’s
article “Clicks, Lies, and Videotape” appeared on pp. 38-43 of the October 2018
issue of Scientific American.