Earlier this month, Canadian science-fiction writer Bob Sawyer attracted a lot of attention with an editorial he wrote for a special robotics issue of the prestigious research journal Science. In his piece, Sawyer showed that writers of science fiction have been exploring the relationship between humans and robots at least since the early stories of Isaac Asimov in the 1940s. But far from arriving at a tidy resolution of the moral questions raised by autonomous, seemingly intelligent machines, the sci-fi crowd appears to have concentrated on the dismal downsides of what could go wrong with robots despite the best intentions of humans to make them safe and obedient. Think Frankenstein, only with Energizer-Bunny endurance and superhuman powers.
Nevertheless, Sawyer is an optimist. He applauds the efforts of South Korea, Japan, and the European Robotics Research Network to develop guidelines for the ethical aspects of robot use, and chides the U.S. for lagging in this area. He uses phrases like "robot responsibilities and rights" and speculates that the main reason this country hasn't developed robot ethics is that so many of its robots or robot-like machines are used by the military. He specifically wants us to explore the question of whether "biological and artificial beings can share this world as equals." He winds up with the hope that we might all aspire to the outcome of a 1938 story in which a man married a robot. That is, he looks forward to the time when all of us, like the lovers in countless fairy tales, can be "living, as one day all humans and robots might, happily ever after."
Well. Hope is a specifically human virtue, and is not to be thoughtlessly disparaged. But Sawyer has erred in blurring some vitally important distinctions that often get overlooked in discussions about the present and future role of robots in society.
I do not know anything about Sawyer's core beliefs and philosophy other than what he said in his editorial. But I hope he writes his fiction more carefully than he writes editorials.
Much depends on what we mean when we call a machine a "robot." The key question is whether the machine is under the control of a human being, and to what extent that control is exercised. Sawyer begins his editorial with the story of how a remotely piloted vehicle dropped a bomb on two people who appeared to be planting an explosive device in Iraq. He terms this vehicle, which was undoubtedly under the continuous control of a human operator, a "quasi-robot." No doubt it contains numerous servomechanisms to relieve the operator of tedious hand-controlled steering and stabilization duties, but to call a remotely controlled bomber a "quasi-robot" is to grant it a degree of autonomy that it does not possess.
Autonomy is a relative term. The word's roots mean "self-governing," and there is no entirely autonomous being in the universe except God. The issue of autonomy is a red herring that distracts attention from the real question, which is this: is it even possible for a human-made machine to possess moral agency?
Now I've got to explain what moral agency is. We are used to the idea that children below a certain age are not allowed to enter into contracts, marry, smoke, or drink. Why is this? Because society has rendered a judgment that they are in general not mature enough to exercise independent (autonomous) moral judgment about these matters. They are not old enough to be regarded as moral agents in every respect under the law. Of course, even young children seem to have some built-in ability to make moral judgments. Isn't "That's not fair!" one of the favorite phrases among the six-year-old set? We accord certain rights and responsibilities to humans as they mature because we recognize that they, and only they, can act as moral agents.
Sawyer's mistake (or one of them, anyway) is to assume that as artificial intelligence and robotics progress, robots will mature essentially as humans do and will be able to behave like moral agents. I would point out that this achievement is far from being demonstrated. But even if a robot someday simulates moral agency in a way indistinguishable from the human kind, this fact will always be true: machines have been, are, and always will be the products of the human mind. As such, the human mind or minds that create them also bear the ultimate moral responsibility for a robot's actions and behavior, no matter how seemingly autonomous the robot becomes. So the robot can have no "rights and responsibilities"; those are things that only moral agents, namely humans, can possess.
This fact is illustrated by one of Sawyer's own examples. He cites the case of a $10 million jury award to a man who was injured back in 1979 by a robot, probably an industrial machine. You can bet that in 1979, the robot in question was no autonomous R2-D2; it was probably something like one of the advanced welders you see in automotive ads, the ones that zip around making ten welds in the time it takes a human to make one. I merely note that the injured party did not sue the robot for $10 million; he sued the robot's operators and owners, because everybody agrees that if a machine causes injury, and one or more humans are responsible for the actions of that machine, then the humans are at fault and bear the moral responsibility for what the machine did.
Another distinction Sawyer fails to make is between good and necessary laws regulating the development and use of robots as robots, and the entirely pernicious and unnecessary idea of treating robots as autonomous moral agents. But as I'm out of space for today, I will take this question up next week.
Sources: Sawyer's editorial appeared in the Nov. 16, 2007 issue of Science (vol. 318, p. 1037). I addressed some issues related to the question of robot ethics in my blog entry "Are Robots Human? or, Are Humans Robots?" for July 30, 2007. Bob Sawyer's webpage is at http://sfwriter.com.