In the December Scientific
American, brain expert Christof Koch takes up the question of whether a
machine will ever manifest consciousness.
This seemingly abstruse issue may arise in actual systems sooner than we
think, so it is worth considering before it does.
Consciousness itself is one
of the leading puzzles of brain science.
Most of us can tell whether or not we are conscious, with the possible
exception of dreams in which we are sure we're awake, only to wake up and
realize we were dreaming. Consciousness
seems to be something we share with the higher animals, but defining it in a
way that satisfies either philosophers or neurologists turns out to be harder
than you might think.
Koch describes some of the
markers of human consciousness: the
ability to feel embarrassed, for example.
This brings to mind Mark Twain's quip:
"Man is the only animal that blushes—or needs to." He clears the ground first by saying
"There is little doubt that our intelligence and our experiences are ineluctable
consequences of the natural causal powers of our brain, rather than any
supernatural ones." Then he says
that there are two competing and largely incompatible schools of thought
regarding how the human brain evokes consciousness.
One popular hypothesis,
called the global neuronal workspace theory (GNW for short), says that when
different parts of the brain concerned with disparate activities such as memory,
vision, and motor operations work together in a sort of common workspace or
connected region, the neuronal workspace itself gives rise to what we call
consciousness. For example, when you
read a sentence that reminds you of something, the workspace is simultaneously
handling your visual field, interpreting it in terms of the ideas the words
contain, and recalling other things those ideas bring to mind.
According to this theory,
consciousness is eminently computable, in the sense that what counts is the information-processing
part, which can be carried out just as well by a sufficiently complicated
computer simulation as by the actual "wet computer" we call the
brain. So GNW says that as soon as we
can make AI systems as complex as the human brain that work in a similar way, we
should be able to make the computer equivalent of a global neuronal workspace,
and it will be just as conscious as a human.
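For the programmers among us, the flavor of the GNW picture can be suggested
with a toy sketch. The following Python snippet is entirely my own invention,
not anything from Koch or the GNW literature: a few specialist modules bid for
a single shared workspace, and whichever signal wins is broadcast back to all
of them.

```python
from dataclasses import dataclass

@dataclass
class Signal:
    source: str      # which module produced it
    content: str     # what the signal is about
    salience: float  # how strongly it bids for the workspace

class Module:
    """A specialist process (vision, memory, motor...) with one concern."""
    def __init__(self, name, keyword, base_salience):
        self.name, self.keyword = name, keyword
        self.base_salience = base_salience
        self.heard = []  # broadcasts this module has received

    def propose(self, stimulus):
        # Bid more strongly when the stimulus matches this module's concern.
        boost = 1.0 if self.keyword in stimulus else 0.0
        return Signal(self.name, f"{self.name} noticed {stimulus!r}",
                      self.base_salience + boost)

    def receive(self, signal):
        self.heard.append(signal)

class GlobalWorkspace:
    """Toy broadcast cycle: modules compete, one signal wins the shared
    workspace, and the winner is re-broadcast to every module."""
    def __init__(self, modules):
        self.modules = modules

    def cycle(self, stimulus):
        bids = [m.propose(stimulus) for m in self.modules]
        winner = max(bids, key=lambda s: s.salience)  # competition
        for m in self.modules:                        # global broadcast
            m.receive(winner)
        return winner

ws = GlobalWorkspace([Module("vision", "red", 0.5),
                      Module("memory", "birthday", 0.4),
                      Module("motor", "reach", 0.3)])
print(ws.cycle("a red balloon at a birthday party").content)
# -> vision noticed 'a red balloon at a birthday party'
```

In the real theory the "broadcast" is a sustained ignition of activity across
far-flung cortical areas, not a function call, but the compete-then-broadcast
shape is the same.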
A more recent competing
theory throws cold water on this idea.
Called integrated information theory (IIT for short), it begins with
some axioms about the nature of consciousness, such as its unified, specific
quality and the fact that consciousness is real rather than a mere apparent
byproduct of brain function. It then goes on to
what is apparently a rigorous and complex mathematical analysis of any
candidate system for consciousness, and comes up with a single number called Φ
(the Greek capital letter phi, pronounced "fie," I think, or "fee," depending
on who you ask). This number is allegedly a measure of the system's potential
to be conscious. Unsurprisingly, the human brain's Φ rating comes up pretty
high, because the brain has a lot of what IIT theorists call "intrinsic causal
power." On the other hand, the standard architectures used in modern computers
have very low levels of Φ and so have little potential for consciousness.
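To give a rough feel for what "integrated information" means, here is a toy
calculation loosely inspired by early formulations of IIT; the real theory's
mathematics is far more elaborate, and everything below, from the three-unit
XOR network to the use of plain mutual information, is a simplification of my
own. The idea is to ask how much better the system predicts its own next state
as a whole than its parts do once the system is cut in two, taking the worst
cut.

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Empirical mutual information I(X; Y) in bits from (x, y) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in joint.items())

def step(state):
    """Toy coupled dynamics on a 3-unit ring: each unit XORs the other two."""
    a, b, c = state
    return (b ^ c, a ^ c, a ^ b)

def project(samples, idx):
    """Restrict (state_t, state_t+1) samples to the units listed in idx."""
    pick = lambda s: tuple(s[i] for i in idx)
    return [(pick(x), pick(y)) for x, y in samples]

random.seed(0)
samples = []
for _ in range(5000):  # sample transitions from random initial states
    s = tuple(random.randint(0, 1) for _ in range(3))
    samples.append((s, step(s)))

# Information the whole system's state carries about its own next state...
whole = mutual_information(samples)

# ...versus the best bipartition, where each part may only predict itself.
bipartitions = [((0,), (1, 2)), ((1,), (0, 2)), ((2,), (0, 1))]
toy_phi = min(
    whole - (mutual_information(project(samples, a))
             + mutual_information(project(samples, b)))
    for a, b in bipartitions
)
print(f"whole = {whole:.2f} bits, toy phi = {toy_phi:.2f} bits")
```

The XOR coupling makes the toy phi come out positive: the whole predicts its
own future better than the sum of its severed parts, which is the shape of the
intuition behind IIT's claim about "intrinsic causal power."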
Why should we care? Well, for one thing, if we ever do come up
with an artificial-intelligence system that is conscious, that raises the
question of our moral obligations to it. Largely
because we think animals such as cats, dogs, cows, and chickens are conscious, we
have animal-cruelty laws that attempt to ban some of the worst things we can do
to our fellow conscious creatures. If we
start an AI program running that begins to beg and plead with pathos in its
voice that we keep away from the "off" switch, will this request
carry any moral obligation with it?
The GNW people would say
yes, because they think if it acts conscious, it is conscious, and so you'd
better treat it that way. The IIT
people, on the other hand, would say that such pleas are no more meaningful
than those of a character in a single-player video game: the machine is just a
box of wires and transistors with practically no Φ at all, and we have no
moral obligation to such a thing.
Koch says the two theories
make different and experimentally discernible predictions about certain aspects
of the brain and consciousness, so a bunch of well-funded researchers are
currently trying to find out whether GNW, IIT, or perhaps some
as-yet-unanticipated third theory is correct.
Meanwhile, we conscious but
non-specialist mortals can still speculate about such things. An environmental engineer and part-time
author named W. L. Patenaude recently sent me an entertaining and thought-provoking
novel that addresses some of these questions in a near-future setting. His A Printer's Choice deals with
questions of machine consciousness and our moral obligations to such devices,
if any. It's hard to describe his novel
without giving away some critical plot twists, but suffice it to say that
Patenaude has come up with a well-considered future scenario that seems
realistic, and he draws some surprising conclusions about what may happen when
sophisticated AI is combined with the 3-D printers of the future.
In the meantime, devices
such as Siri and Alexa tempt us to think of our AI systems as worthy of talking
to, if not exactly conscious. We are
used to treating as conscious any entity that we can hold a conversation with,
and that was the basis of Alan Turing's famous "imitation game," now referred
to as the Turing test.
But even if AI systems manage to pass Turing tests all the time, that
won't answer the question of whether they are in fact conscious, or simply
giving an excellent simulation of consciousness while knowing, understanding,
and feeling nothing. It's a question
well worth answering, but we may not have the answer for a while yet.
Sources: Christof
Koch's article "Proust Among the Machines" appeared in the December
2019 issue of Scientific American, pp. 46-49. W. L. Patenaude's A Printer's Choice (Izzard
Ink, 2018) is available on Amazon.com.
I also referred to the Wikipedia article on integrated information
theory.