Robots build cars, clean carpets, and answer
phones, but would you trust one to decide how you should be treated in a rest
home or a hospital? That's one of
the questions raised recently by a thoughtful article in the online business
news journal Quartz. Journalist Olivia Goldhill interviewed
ethicists and computer scientists who are thinking about and working on plans
to enable computers and robots to make moral decisions. To some people, this smacks of robots
taking over the world. Before you
get out the torches and pitchforks, however, let me summarize what the
researchers are trying to do.
Some of the projects are nothing more than a
type of expert system, a decision-making aid that has already found wide
usefulness in professions such as medicine, engineering, and law. For example, the subject of
international law can be mind-numbingly complicated. Researchers at the Georgia Institute of Technology are
trying to develop machines that will ensure compliance with international law
by programming in all the relevant codes (in the law sense) so that the coding
(in the computer-science sense) will lead to decisions or outcomes that
automatically comply with the pertinent statutes. This amounts to a sort of robotic legal assistant with
flawless recall, but one that doesn't make final decisions on its own. That would be left to a human lawyer,
presumably.
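To give a rough flavor of what such a rule-based legal assistant looks like under the hood, here is a minimal sketch in Python. The rules, data fields, and function names are invented purely for illustration and are not drawn from the Georgia Tech project itself.

```python
# Minimal illustration of a rule-based compliance check: each "statute" is
# encoded as a predicate that a proposed action must satisfy, and the system
# reports which rules would be violated. All names here are hypothetical.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    disclosure_made: bool       # required notices were given
    jurisdiction_cleared: bool  # action falls within the permitted jurisdiction

# Each rule maps a statute's name to a predicate over the proposed action.
RULES = {
    "required disclosure": lambda a: a.disclosure_made,
    "jurisdictional limits": lambda a: a.jurisdiction_cleared,
}

def check_compliance(action: ProposedAction) -> list[str]:
    """Return the names of the rules the proposed action would violate."""
    return [name for name, satisfied in RULES.items() if not satisfied(action)]

print(check_compliance(ProposedAction(disclosure_made=True,
                                      jurisdiction_cleared=False)))
# -> ['jurisdictional limits']
```

The program only flags conflicts; a human lawyer still reads the report and makes the call.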
Things are a little different with a project
that philosopher Susan Anderson and her computer-scientist husband Michael
Anderson are working on: a program
that advises healthcare workers caring for elderly patients. Instead of programming in explicit
moral rules, they teach the machine by example. The researchers take a few problem cases and let the machine
know what they would do, and after that the machine can deal with similar
problems. So far it's all a hypothetical
academic exercise, but in Japan, where one out of every five residents is over
65, robotic eldercare is a booming business. It's just a matter of time until someone installs a
moral-decision program like the one the Andersons are developing in a robot
that may be left on its own with an old geezer such as the writer of this
blog.
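For readers curious what "teaching by example" can mean in practice, here is a minimal sketch of the general idea: store a handful of cases labeled with the action the researchers judged right, then advise on a new case by finding the most similar stored one. The features, cases, and numbers below are hypothetical and are not the Andersons' actual system.

```python
# A minimal sketch of learning from labeled cases. Each case is described by
# simple numeric features, e.g. (harm if refusal is ignored, benefit of
# intervening, degree to which the patient's autonomy would be overridden).

labeled_cases = [
    ((0.9, 0.8, 0.3), "notify supervisor"),   # patient refuses vital medication
    ((0.2, 0.3, 0.8), "respect refusal"),     # patient declines optional treatment
    ((0.7, 0.6, 0.4), "try again later"),     # patient refuses, risk is mild
]

def advise(new_case):
    """Recommend the action attached to the most similar stored case."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, action = min(labeled_cases, key=lambda c: distance(c[0], new_case))
    return action

print(advise((0.85, 0.75, 0.32)))  # -> "notify supervisor"
```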
What the Quartz article didn't address directly
is the question of moral authority.
And here is where we can find some matters for genuine concern.
Many of the researchers working on aspects of
robot morality evinced frustration that human morality is not, and may never
be, reducible to the kind of algorithms that computers can execute. Everybody who has thought about the
question realizes that morality isn't as simple and straightforward as playing
tick-tack-toe. Even the most
respected human moral reasoners will often disagree about the best decision in
a given ethical situation. But
this isn't the fundamental problem in implementing moral reasoning in robots.
Even if we could come up with robots who could
write brilliant Supreme Court decisions, there would be a basic problem with
putting black robes on a robot and seating it on the bench. As most people will still agree, there
is a fundamental difference in kind between humans and robots. To avoid getting into deep philosophical
waters at this point, I will simply say that it's a question of authority. Authority, in the sense I'm using it,
can only vest in human beings. So
while robots and computers might be excellent moral advisers to humans, by the
nature of the case it is humans who must always hold moral authority and make
the moral decisions.
If someone installs a moral-reasoning robot in
a rest home and lets it loose with the patients, you might claim that the robot
has authority in the situation.
But if you start thinking like a civil trial lawyer and ask who is
ultimately responsible for the actions of the robot, you will realize that if
anything goes seriously wrong, the cops aren't going to haul the robot off to
jail. No, they will come after the
robot's operators and owners and programmers—the human beings, in other words,
who installed the robot as their tool, but who are still morally responsible
for its actions.
People can try to abdicate moral responsibility
to machines, but that doesn't make them any less responsible. For example, take the practice of using
computerized credit-rating systems in making consumer loans. My father was a loan officer at a bank
in the 1960s before such credit-rating systems came into widespread use. He used references, such bank records
as he had access to, and his own gut feelings about a potential customer to
decide whether to make a loan.
Today, most loan officers have to take a customer's computer-generated
numerical credit rating into account, and deciding whether to make a loan
sometimes amounts to executing a complicated algorithm that could almost be
run entirely by a computer.
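The kind of mechanical decision rule I mean looks something like the sketch below: weight a few inputs, compare the total to a cutoff, and approve or decline. The weights and threshold are invented for illustration and do not come from any real lender's model.

```python
# Hypothetical scorecard-style loan decision: a weighted sum of a few inputs
# compared against a fixed cutoff. All numbers are made up for illustration.

def loan_decision(credit_score: int, debt_to_income: float, years_employed: float) -> str:
    points = (
        0.5 * credit_score          # bureau score dominates
        - 200 * debt_to_income      # heavy penalty for a high debt load
        + 10 * years_employed       # modest credit for job stability
    )
    return "approve" if points >= 300 else "decline"

print(loan_decision(credit_score=720, debt_to_income=0.35, years_employed=4))
# -> "approve"
```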
But automation did not stop the banking
industry from running over a cliff during the housing crash of 2007. Nobody blamed computers alone for that
debacle—it was the people who believed in their computer forecasts and complex computerized
financial instruments who led the charge, and who bear the responsibility. The point is that computers and their
outputs are only tools. Turning
one's entire decision-making process over to a machine does not mean that the
machine has moral authority. It
means that you and the machine's makers now share whatever moral authority
remains in the situation, which may not be much.
I say not much may remain of moral authority,
because moral authority can be destroyed.
When Adolf Hitler came to power, he overrode the established German court
system with special "political courts" that were
empowered to countermand verdicts of the regular judges. While the political courts had power up
to and including issuing death sentences, history has shown that they had
little or no moral authority, because they were corrupt accessories to Hitler's
debauched regime.
As Anglican priest Victor Austin shows in his
book Up With Authority, authority
inheres only in persons. While we
may speak colloquially about the authority of the law or the authority of a
book, it is a live lawyer or expert who actually makes moral decisions where
moral authority is called for.
Patrick Lin, one of the ethics authorities cited in the Quartz article, realizes this and says
that robot ethics is really just an exercise in looking at our own ethical
attitudes in the mirror of robotics, so to speak. And in saying this, he shows that the dream of relieving
ourselves of ethical responsibility by handing over difficult ethical decisions
to robots is just that—a dream.
Sources: The Quartz
article "Can We Trust Robots To Make Moral Decisions?" by Olivia
Goldhill appeared on Apr. 3, 2016 at http://qz.com/653575/can-we-trust-robots-to-make-moral-decisions/. (I thank my wife for pointing it out to
me.) The statistic about the
number of aged people in Japan is from http://www.techinsider.io/japan-developing-carebots-for-elderly-care-2015-11,
and my information about Hitler's political courts appears on the website of
the United States Holocaust Memorial Museum at https://www.ushmm.org/wlc/en/article.php?ModuleId=10005467. Victor Lee Austin's Up With Authority was published in 2010
by T&T Clark International.