If you're reading this, you're conscious of reading
it. Consciousness is something
most of us experience every day, but for philosophers, it has proved to be a
tough nut to crack. What is it,
exactly? And more relevant for
engineers, can machines—specifically, artificially intelligent computers—be
conscious?
Until recently, questions like this came up only in
obscure academic journals and science fiction stories. But now that personal digital
assistants like Siri are enjoying widespread use, the issue has fresh relevance
both for consumers and for those developing new AI (artificial intelligence)
systems.
Philosophers of mind such as David Chalmers point out that one of the most difficult problems concerning consciousness, what Chalmers calls the "hard problem," is explaining the nature of subjective experience. Take
the color red, for example. Yes,
you can point to a range of wavelengths in the visible-light spectrum that most
people will call "red."
But the redness of red isn't just a certain wavelength range. A five-year-old child who knows his colors can recognize red, but unless he's unusual, he knows nothing about the physics of light and wavelengths. Yet when
he sees something red, he is conscious of seeing something red.
One popular school of thought about the nature of consciousness is the "functionalist" school. Functionalists treat a candidate for consciousness as a black box and imagine having a conversation with it. If its answers convince you that you're
talking with a conscious being, well, that's as much evidence as you're going
to get. By this measure, some
people probably already think Siri is conscious.
Now along comes a neuroscientist named Giulio Tononi,
who has been working on something he calls "integrated information
theory" or IIT. It has little
to do with the kind of information theory familiar to electrical
engineers. Instead, it is a formal mathematical theory that starts from a few axioms about the nature of consciousness that most people would accept. Unfortunately, it's pretty complicated and I can't go into
the details here. But starting
from these axioms, he works out postulates and winds up with a list of
characteristics that any physical system capable of supporting consciousness
should have. The results, to say
the least, are surprising.
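To give at least a flavor of the mathematics: IIT's central quantity, usually written Φ ("phi"), is meant to capture how much information a system generates as an integrated whole, over and above what its parts generate separately. The Python sketch below is strictly my own toy illustration of that intuition, not Tononi's actual formalism; real Φ is computed over a system's full cause-effect structure and minimized over all possible partitions.

```python
import itertools
from collections import Counter
from math import log2

def mutual_information(pairs):
    """Mutual information (bits) between the first and second elements
    of each (past, present) pair, with pairs weighted uniformly."""
    n = len(pairs)
    p_xy = Counter(pairs)
    p_x = Counter(x for x, _ in pairs)
    p_y = Counter(y for _, y in pairs)
    return sum(
        (c / n) * log2((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
        for (x, y), c in p_xy.items()
    )

def toy_phi(update):
    """Crude stand-in for integrated information: how much the whole
    two-node system 'knows' about its own past, minus what its two
    nodes know about their own pasts taken separately."""
    pasts = list(itertools.product([0, 1], repeat=2))
    transitions = [(p, update(p)) for p in pasts]
    whole = mutual_information(transitions)
    parts = sum(
        mutual_information([(p[i], q[i]) for p, q in transitions])
        for i in range(2)
    )
    return whole - parts

# Integrated loop: each node copies the *other* node's previous state.
print(toy_phi(lambda s: (s[1], s[0])))  # 2.0 bits: the parts explain nothing
# Modular system: each node copies only itself.
print(toy_phi(lambda s: (s[0], s[1])))  # 0.0 bits: the parts explain it all
```

By this crude measure, the little feedback loop is "integrated" (its behavior can't be accounted for node by node), while the self-copying pair is not. That kind of structural distinction is what IIT builds on.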
For one thing, he says that while current AI systems implemented on standard stored-program computers can give a good impression of conscious behavior, IIT shows that their structure is incapable of supporting consciousness. That is, if it walks like it's
conscious and quacks like it's conscious, it isn't necessarily conscious. So even if Siri manages to convince all
its users that it's conscious, Tononi would say it's just a clever trick.
How can this happen? Well, philosopher John Searle's "Chinese room"
argument may help in this regard.
Suppose a man who knows no Chinese is nevertheless in a room with a
computer library of every conceivable question one can ask in Chinese, along
with the appropriate answers that will convince a Chinese interrogator outside
the room that the entity inside the room is conscious. All the man in the room does is take
the Chinese questions slipped under the door, use his computer to look up the
answers, and send the answers (in Chinese) back to the Chinese questioner on
the other side of the door. To the
questioner, it looks like there's somebody who is conscious inside the
room. But a reference library
can't be conscious, even if it's computerized, and the only candidate for
consciousness inside the room—the man using the computer—can't read Chinese,
and so he isn't conscious of the interchange either. According to Tononi, every AI program running on a
conventionally designed computer is just like the man in the Chinese room—maybe
it looks conscious from the outside, but its structure keeps it from ever being
conscious.
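To make the lookup-table character of the room concrete, here's a deliberately trivial Python sketch (the questions and replies are invented for illustration). It can hold up its end of a small conversation, yet there is plainly nothing in it that understands or experiences anything:

```python
# A toy "Chinese room": conversation by pure retrieval, no understanding.
CANNED_REPLIES = {
    "how are you feeling today?": "A little tired, but glad you asked.",
    "what is your favorite color?": "Red. It reminds me of sunsets.",
    "are you conscious?": "Of course I am. Aren't you?",
}

def room_reply(question: str) -> str:
    # The "man in the room": match the incoming symbols to a stored
    # entry and pass the stored answer back under the door.
    key = question.strip().lower()
    return CANNED_REPLIES.get(key, "Could you rephrase the question?")

print(room_reply("Are you conscious?"))  # -> "Of course I am. Aren't you?"
```

Scale the table up far enough and, on Searle's and Tononi's view, you get something that passes the functionalist's conversation test while remaining, structurally, just a lookup.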
On the other hand, Tononi says that the human brain, specifically the cerebral cortex, has just the kind of interconnections, and the ability to change its own form, that are needed to realize consciousness. That's good news,
certainly, but along with that reassurance comes a more profound implication of
IIT: the possibility of making
machines whose consciousness would not only be evident to those outside, but
could be proven mathematically.
Here we get into some really deep waters. IIT is by no means universally accepted
in the neuroscience community. As one might expect, it's rather unpopular among AI workers who think either that consciousness is an illusion or that brains and computers are basically the same kind of thing, with consciousness just a matter of degree rather than a difference in kind.
But suppose that Tononi's theory is basically
correct, and we get to the point where we can take a look at a given physical
system, whether it's a brain, a computer, or some as-yet-uninvented future
artifact, and measure its potential to be conscious rather like you can measure
a computer's clock speed today. In
an article co-written with Christof Koch in the June 2017 IEEE Spectrum, Tononi concludes that "Such a neuromorphic
machine, if highly conscious, would then have intrinsic rights, in particular
the right to its own life and well-being.
In that case, society would have to learn to share the world with its
own creations."
In a sense, we've been doing exactly that all
along—ask any new parent how it's going.
But Tononi's "creation" isn't another human—it would be some
kind of machine, broadly speaking, whose consciousness would be verified by
IIT. There has been talk of robot rights for some years, though so far, fortunately, it has remained entirely hypothetical. But if Tononi's theory
comes to be more widely accepted and turns out to do what he claims it will do,
we may some day face the question of how to treat entities (I can't think of
another word) that seem to be as alive as you or me, but depend for their
"lives" on Pacific Gas and Electric, not the grocery store.
Well, I don't have a good answer to that one, except
that we're a long way from that consummation. People are trying to design intelligent computers that are actually built the way the brain is built, but such neuromorphic efforts lag well behind the usual AI approach of programming and simulating neural networks on conventional computer hardware. If Tononi is right, the
conventional AI approach leads only to what I was pretty sure was the case all
along—a fancy adding machine that can talk and act like a person, but is in
fact just a bunch of hardware. But
if we ever build a machine that not only acts conscious, but is conscious according to IIT, well,
let's worry about that when it happens.
Sources: Christof Koch and Giulio Tononi's
article "Can We Quantify Machine Consciousness?" appeared on pp.
65-69 of the June 2017 issue of IEEE
Spectrum, and is also available online at http://spectrum.ieee.org/computing/hardware/can-we-quantify-machine-consciousness. I also referred to the Wikipedia
article on integrated information theory and the Scholarpedia article at http://www.scholarpedia.org/article/Integrated_information_theory.