Wade Roush is a journalist who writes a column on innovation
for the prestigious Scientific American monthly. In the June issue, he looks at the future of
increasingly smart and omnipresent artificial-intelligence (AI) agents that
you can talk with—Apple's Siri, Google's Assistant, Amazon's Alexa, Microsoft's
Cortana, and so on. Apple has built
Siri into its AirPods so all you have to do is say, "Hey, Siri,"
and she's right there in your ear canals.
(Full disclosure: I don't use any
of these apps, except for a dumb talk-only GPS assistant we've named Garmina.)
True to his column's marching orders, Roush came up with a
list of five protections that he says users should "insist on in
advance" before we go any farther with these smart electronic
assistants. Don't get me wrong, it's a
good list. But the chances of any of the
five taking hold or being realized in any substantial way are, in my view, way
smaller than a snowball's chances in you-know-where.
Take his first item:
privacy. Inevitably, AI
interactions are cloud-based because of the heavy-duty processing
required. Therefore, he calls for
end-to-end encryption so that even the companies running the AI assistants can't
tell what's going on. This is a
self-contradictory requirement. End-to-end encryption shields a message from
everyone except the parties at its two ends, and here the company's server is
one of those ends: of course it has to know what you're asking, because
otherwise how is it going to respond to requests for information?
Maybe Roush is thinking of something like the old firewall idea that
used to be maintained between the editorial and advertising divisions of a news
organization. But there are huge holes
in those walls now even in the most traditional news outlets, and I don't see
how any company could both remain ignorant of what's going on between its AI
system and the user, and have the AI system do anything useful.
The next protection he asks for is unprecedented, so I will
quote it directly: "AI providers
must be up front about how they are handling our data, how customer behavior
feeds back into improvements in the system, and how they are making money,
without burying the details in unreadable, 50-page end-user license agreements." If any of the AI-assistant firms manage to do
this, it will be the first time in recorded history. Especially the part about how they make
money. That's called a firm's business
strategy, and it's one of the most closely guarded secrets that most firms
have.
Next, he calls for every link in the communication chain to
be "hacker-proof." Good luck
with that. Hacker-resistant, I can
see. But not hacker-proof.
Next, he says the assistants must draw on "accurate data
from trusted sources." This is a
hard one. If you ask Alexa a question
like, "What do you mean, an Elbonian wants my help in transferring
millions out of his country?" what's she going to say in response? The adage "garbage in, garbage out"
still applies to AI systems just as it did to IBM System/360s in the
1960s. And if we're truly talking about artificial
intelligence, with no human intervention, I don't see how AI systems will
filter out carefully designed phishing attacks or Russian-sponsored political
tweets any better than humans do, which is to say, not very well.
And I've saved the best for last. He calls for autonomy, for AI assistants to
give us more agency over our lives:
"It would be a disaster for everyone if they morphed into vehicles
for selling us things, stealing our attention or stoking our anxieties."
Excuse me, but those three actions are how most of the Internet
works. If you took away all the activity
that was designed to sell us things, the Internet would dwindle back down to a
few academics sending scientific data back and forth, which is how it began in
the 1980s. If you told designers not to
try stealing our attention, and turned off all the apps and sites designed to
do so, then Facebook, Instagram, all the online games, Twitter, newsfeeds—all that
stuff would disappear. Facebook
designers are on public record as saying that their explicit, conscious
intention in designing the system was to make using it addictive. And as for stoking our anxieties—well, that's
a good capsule description of about 80% of all the news on the Internet. Take that away, and maybe you'll have some
good stories about rainbows, butterflies, and flowers, but only till the sponsoring
companies go bankrupt for lack of business.
I have no personal animus against Mr. Roush; a columnist dealing
with a new technology has to say something about it. And there's no harm in holding up an ideal
for people to approach in the future, even if they don't have much of a chance
of approaching it very closely. But it's
strange to see a supposedly savvy technology writer call for future protections
on any high-tech innovation that are so ludicrously idealistic, not to say
contradictory on some points.
Perhaps a page from the historians of technology would be
helpful here. They make a distinction
between an internalist view of history and an externalist view. I'm radically simplifying here, but
basically, an internalist (I would count Roush in that number) takes the
general assumptions of a field for granted and looks at things in a
we-can-do-this way. And in principle, if
you take the promises of smart-AI proponents at face value, we could in fact
achieve the five protections that Roush outlined.
But an externalist views a situation more broadly, in the
context of what has happened before both inside and outside a given field. In saying that the protections Roush calls
for are unlikely to be realized fully, I rely on the history of how high-tech
companies and other actors have behaved up to this point, which is to fall far
short of every one of these protections at one time or another.
I hope that this time it will be different, and talking with
your trusted invisible AI assistant will be just as worry-free as talking with
your most trusted actual human friend on the phone. But after writing that sentence, I'm not even
sure that I want that to happen. And if
it does, I think we will have lost something in the process.
Source: Wade
Roush's column "Safe Words for Our AI Friends" appeared on p. 22 of
the June 2019 print issue of Scientific American.