First it was chess: world champion Garry Kasparov lost a six-game match
to an IBM computer named Deep Blue in 1997. And now it's the game called Go, which has been
popular in Asia for centuries.
Earlier this month, Korean Go champion Lee Sedol lost four out of a
series of five games in a match with AlphaGo, a computer program developed by
Google-owned London firm DeepMind.
But Sedol says the experience has given him a whole new view of the game and made
him a much better player.
This development raises some perennial questions about what makes people
human and whether machines will in fact take over the world once they get
smarter than us.
As reported in Wired, the Go match between Lee Sedol and AlphaGo was carried on
live TV and watched by millions of Go enthusiasts. For those not familiar with Go (including yours truly),
it is a superficially simple game played on a 19-by-19 grid of lines with black
and white stones, sort of like an expanded checkerboard. But the rules are both simpler
and more complicated than those of checkers. They
are simpler in that the goal is just to encircle more territory with your
stones than your opponent encircles with his. They are more complicated in that there are vastly more
possible moves in Go than there are in checkers or even chess, so strategizing
takes at least as much brainpower in Go as it does in chess.
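To give a rough sense of the scale involved, a back-of-the-envelope comparison of game-tree sizes fits in a few lines. The figures below are commonly cited approximations, not exact values: roughly 35 legal moves per position over about 80 moves for chess, versus roughly 250 legal moves per position over about 150 moves for Go.

```python
# Rough game-tree size: (average moves per position) ** (typical game length).
# Branching factors and game lengths are approximate, commonly cited figures.
chess_tree = 35 ** 80     # chess: ~35 legal moves per position, ~80-move games
go_tree = 250 ** 150      # Go: ~250 legal moves per position, ~150-move games

print(len(str(chess_tree)))  # about 124 digits
print(len(str(go_tree)))     # about 360 digits
```

Even with these crude numbers, Go's search space dwarfs chess's by hundreds of orders of magnitude, which is part of why the heavy search that served Deep Blue so well was never expected to be enough for Go on its own.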
It's encouraging to note that even when Sedol
lost to the machine, he could come up with moves that equalled the machine's
moves in subtlety and surprise. Of
course, this may not be the case for much longer. It seems that once software developers show they can beat
humans at a given complex task, they lose interest and move on to something
else. And this highlights an aspect of
the situation that, so far, few have commented on: if you go far enough back in the history of
AlphaGo, you find not more machines, but humans.
It was humans who figured out the best
strategies to use for AlphaGo's design, which involved making a lot of slightly
different AlphaGos and having them play against each other and learn from their
experiences. Yes, in that sense
the computer was teaching itself, but it didn't start from scratch. The whole learning environment, and the
existence of the program in the first place, were due not to other machines but
to human beings.
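As a toy illustration of the self-play idea (this is not AlphaGo's actual algorithm, which combines deep neural networks with Monte Carlo tree search; everything below is a simplified, hypothetical sketch), here are two agents that repeatedly play rock-paper-scissors, each adapting by best-responding to the opponent's observed move frequencies:

```python
import random

# Toy self-play sketch (not AlphaGo's method): two agents play
# rock-paper-scissors and each adapts by best-responding to the
# opponent's observed move frequencies.
MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def best_response(opponent_counts):
    # Pick the move that beats the opponent's most frequently seen move.
    likely = max(MOVES, key=lambda m: opponent_counts[m])
    return next(m for m in MOVES if BEATS[m] == likely)

def self_play(rounds=1000, seed=0):
    rng = random.Random(seed)
    counts_a = {m: 1 for m in MOVES}  # what agent A has seen B play
    counts_b = {m: 1 for m in MOVES}  # what agent B has seen A play
    for _ in range(rounds):
        # A little random exploration keeps the agents from locking into a cycle.
        move_a = rng.choice(MOVES) if rng.random() < 0.1 else best_response(counts_a)
        move_b = rng.choice(MOVES) if rng.random() < 0.1 else best_response(counts_b)
        counts_b[move_a] += 1  # B updates its model of A
        counts_a[move_b] += 1  # A updates its model of B
    return counts_a, counts_b

counts_a, counts_b = self_play()
```

The point of the sketch is the structure, not the game: the agents' only teachers are each other, yet the rules, the learning procedure, and the very existence of the loop were all written by a human.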
This gets to one of the main problems I have
with artificial-intelligence (AI) proponents who see as inevitable a day when
non-biological, non-human entities will, in short, take over. Proponents of what is called
transhumanism, such as inventor and author Ray Kurzweil, call this day the
Singularity, because they think it will mark the beginning of a kind of
explosion of intelligence that will make all of human history look like mudpies
by comparison. They point to machines
like Deep Blue and AlphaGo as precursors of what we should expect machines to be
capable of in every phase of life, not just specialized rule-bound activities
like chess and Go.
But while the transhumanists may be right in
certain details, I think there is an oversimplified aspect to their concept of
the Singularity that is often overlooked. The mathematical notion of a singularity is that it's a
point where the rules break down.
True, you don't know what's going on at the singularity point itself,
but you can handle singularities in mathematics and even physics as long as
you're not standing right at the point and asking questions about it. I teach an electrical engineering
course in which we routinely deal with mathematical singularities called poles. As long as the circuit conditions stay
away from the poles, everything is fine.
The circuit is perfectly comprehensible despite the presence of poles,
and performs its functions in accordance with the human-directed goals set out
for it.
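As a concrete (hypothetical) example of the kind of circuit just described, consider a first-order low-pass filter with transfer function H(s) = 1/(s + 1), which has a single pole at s = -1. The steady-state response to a sinusoid is found by evaluating H on the imaginary axis, which never touches the pole, so every value is finite and well-behaved:

```python
# Hypothetical first-order low-pass filter with one pole, at s = -1.
def H(s):
    return 1 / (s + 1)

# Steady-state sinusoidal response: evaluate at s = j*omega.
# The pole at s = -1 lies off the imaginary axis, so these are all finite.
mag_dc = abs(H(0j))   # gain at DC: 1.0
mag_cut = abs(H(1j))  # gain at the cutoff frequency: 1/sqrt(2), about 0.707
```

The filter performs exactly as designed even though a singularity sits in its transfer function; trouble would arise only if the pole lay on the imaginary axis itself, right where the circuit operates.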
All I'm seeing in artificial intelligence tells
me that people are still in control of the machines. For the opposite to be the case—for machines to be superior
to people in the same sense that people are now superior to machines—we'd have
to see something like the following.
The only way new people would come into being would be for the machines
to decide to make one, designing the DNA from scratch and growing and training the
totally designed person for a specific task. This implies, first, that the old-fashioned way of making
people would be eliminated, and second, that people would have allowed this
elimination to take place.
Neither of these eventualities strikes me as at
all likely, at least as a deliberate decision made by human beings. I will admit to being troubled by the
degree to which human interactions are increasingly mediated by opaque
computer-network-intensive means.
If people end up interacting primarily or exclusively through
AI-controlled systems, those systems have an excellent opportunity to manipulate
people to their disadvantage, and to the advantage of the systems or of whoever is
in charge of them.
But so far, all the giant AI-inspired systems
are firmly under the control of human beings, not machines. No computer has ever applied for the
position of CEO of a company, and if it did, it would probably get crossways to
its board of directors in the first few days and get fired anyway. As far as I can tell, we are still in
the regime of Man exerting control over Nature, not Artifice exerting control
over Man. And as C. S. Lewis wrote
in 1947, ". . . what we call Man's power over Nature turns out to be a
power exercised by some men over other men with Nature as its
instrument."
I think it is significant that AlphaGo beat Lee
Sedol, but I'm not going to start worrying that some computerized totalitarian
government is going to take over the world any time soon, because whatever window-dressing the
transhumanists put on their Singularity, that is what it would have to be in
practice: an enslavement of
humanity, not a liberation. And as long as enough people remember that humans
are not machines, and machines are made by, and should be controlled by,
humans, I think we don't have to lose a lot of sleep over machines taking over
the world. What we should watch
are the humans running the machines.
Sources: The match between
Lee Sedol and AlphaGo was described by Cade Metz in Wired at http://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/. I also referred to the Wikipedia
articles on Deep Blue, Go, and AlphaGo.
The quotation from The Abolition
of Man by C. S. Lewis is from the Macmillan paperback edition of 1955, p.
69.