Showing posts with label Up With Authority. Show all posts

Monday, April 11, 2016

Will Robots Ever Have Moral Authority?


Robots build cars, clean carpets, and answer phones, but would you trust one to decide how you should be treated in a rest home or a hospital?  That's one of the questions raised recently by a thoughtful article in the online business news journal Quartz.  Journalist Olivia Goldhill interviewed ethicists and computer scientists who are thinking about and working on plans to enable computers and robots to make moral decisions.  To some people, this smacks of robots taking over the world.  Before you get out the torches and pitchforks, however, let me summarize what the researchers are trying to do.

Some of the projects are nothing more than a type of expert system, a decision-making aid that has already found wide usefulness in professions such as medicine, engineering, and law.  For example, the subject of international law can be mind-numbingly complicated.  Researchers at the Georgia Institute of Technology are trying to develop machines that will ensure compliance with international law by programming in all the relevant codes (in the law sense) so that the coding (in the computer-science sense) will lead to decisions or outcomes that automatically comply with the pertinent statutes.  This amounts to a sort of robotic legal assistant with flawless recall, but one that doesn't make final decisions on its own.  That would be left to a human lawyer, presumably.
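The rule-based approach can be pictured as a small rule engine: legal requirements are encoded as predicates, and a proposed action is checked against every one of them. A minimal sketch follows; the rules, citations, and field names are invented for illustration and are not drawn from the actual Georgia Tech project.

```python
# Toy rule-based compliance checker: each rule pairs a predicate with a
# citation.  All rule contents here are hypothetical, for illustration only.

def check_compliance(action, rules):
    """Return a list of (citation, message) for every rule the action violates."""
    return [(cite, msg) for test, cite, msg in rules if not test(action)]

RULES = [
    (lambda a: not a["targets_civilians"],
     "Hypothetical Art. 1", "Action must not target civilians."),
    (lambda a: a["force"] <= a["threat"],
     "Hypothetical Art. 2", "Force used must be proportional to the threat."),
]

proposed = {"targets_civilians": False, "force": 3, "threat": 5}
violations = check_compliance(proposed, RULES)
print(violations)  # [] -> the proposed action complies with every rule
```

Note that even in this caricature the machine only reports violations; what to do about them is left to a human.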

Things are a little different with a project that philosopher Susan Anderson and her computer-scientist husband, Michael Anderson, are working on:  a program that advises healthcare workers caring for elderly patients.  Instead of programming in explicit moral rules, they teach the machine by example.  The researchers take a few problem cases and let the machine know what they would do, and after that the machine can deal with similar problems.  So far it's all a hypothetical academic exercise, but in Japan, where one out of every five residents is over 65, robotic eldercare is a booming business.  It's just a matter of time until someone installs a moral-decision program like the one the Andersons are developing in a robot that may be left on its own with an old geezer, such as the writer of this blog.
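Teaching by example can be sketched, in its crudest form, as nearest-neighbor reasoning: given a handful of cases labeled with the ethicists' decisions, the machine reuses the advice from the training case most similar to a new situation. The features and cases below are invented for illustration; the Andersons' actual system is considerably more sophisticated.

```python
# Toy "learn morality by example" advisor.  Each training case maps a
# situation, described as numbers, to the advice the ethicists gave.
# Features and cases are hypothetical, for illustration only.

# Situation: (patient_refuses, risk_of_harm, patient_competent) -> advice
CASES = [
    ((1, 0.9, 1), "notify overseer"),   # competent refusal, high risk
    ((1, 0.1, 1), "respect refusal"),   # competent refusal, low risk
    ((0, 0.5, 1), "remind later"),
]

def distance(a, b):
    """Squared Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def advise(situation):
    """Return the advice attached to the nearest training case."""
    return min(CASES, key=lambda case: distance(case[0], situation))[1]

print(advise((1, 0.8, 1)))  # nearest case is the high-risk refusal: notify overseer
```

The fragility is easy to see: the machine's "morality" is only as good as the handful of examples and the similarity measure its programmers chose.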

What the Quartz article didn't address directly is the question of moral authority.  And here is where we can find some matters for genuine concern.

Many of the researchers working on aspects of robot morality evinced frustration that human morality is not, and may never be, reducible to the kind of algorithms that computers can execute.  Everybody who has thought about the question realizes that morality isn't as simple and straightforward as playing tick-tack-toe.  Even the most respected human moral reasoners will often disagree about the best decision in a given ethical situation.  But this isn't the fundamental problem in implementing moral reasoning in robots.

Even if we could come up with robots that could write brilliant Supreme Court decisions, there would be a basic problem with putting black robes on a robot and seating it on the bench.  As most people will still agree, there is a fundamental difference in kind between humans and robots.  To avoid getting into deep philosophical waters at this point, I will simply say that it's a question of authority.  Authority, in the sense I'm using it, can only vest in human beings.  So while robots and computers might be excellent moral advisers to humans, by the nature of the case it is humans who must always hold moral authority and make the moral decisions.

If someone installs a moral-reasoning robot in a rest home and lets it loose with the patients, you might claim that the robot has authority in the situation.  But if you start thinking like a civil trial lawyer and ask who is ultimately responsible for the actions of the robot, you will realize that if anything goes seriously wrong, the cops aren't going to haul the robot off to jail.  No, they will come after the robot's operators and owners and programmers—the human beings, in other words, who installed the robot as their tool, but who are still morally responsible for its actions. 

People can try to abdicate moral responsibility to machines, but that doesn't make them any less responsible.  For example, take the practice of using computerized credit-rating systems in making consumer loans.  My father was a loan officer at a bank in the 1960s, before such credit-rating systems came into widespread use.  He used references, such bank records as he had access to, and his own gut feelings about a potential customer to decide whether to make a loan.  Today, most loan officers have to take a customer's computer-generated numerical credit rating into account, and the job of making a loan sometimes reduces to a complicated algorithm that could almost be executed by a computer.
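In caricature, the algorithm a modern loan officer executes might be nothing more than a few thresholds applied to computer-generated numbers. The cutoffs and field names below are invented for illustration and do not reflect any real bank's policy.

```python
# Caricature of an algorithmic loan decision: thresholds on a credit score
# and a debt-to-income ratio.  All numbers here are hypothetical.

def loan_decision(credit_score, debt_to_income):
    """Return 'approve', 'refer to officer', or 'deny' from two numbers."""
    if credit_score >= 720 and debt_to_income <= 0.36:
        return "approve"
    if credit_score >= 620 and debt_to_income <= 0.43:
        return "refer to officer"   # gut feeling still gets a small say
    return "deny"

print(loan_decision(750, 0.30))  # approve
print(loan_decision(650, 0.40))  # refer to officer
```

Whoever chose those thresholds, not the code that applies them, is the one exercising moral authority over the borrower.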

But automation did not stop the banking industry from running over a cliff during the housing crash of 2007.  Nobody blamed computers alone for that debacle—it was the people who believed in their computer forecasts and complex computerized financial instruments who led the charge, and who bear the responsibility.  The point is that computers and their outputs are only tools.  Turning one's entire decision-making process over to a machine does not mean that the machine has moral authority.  It means that you and the machine's makers now share whatever moral authority remains in the situation, which may not be much.

I say not much may remain of moral authority, because moral authority can be destroyed.  When Adolf Hitler came to power, he supplanted the established German judicial system of courts with special "political courts" that were empowered to countermand verdicts of the regular judges.  While the political courts had power up to and including issuing death sentences, history has shown that they had little or no moral authority, because they were corrupt accessories to Hitler's debauched regime.

As Anglican priest Victor Austin shows in his book Up With Authority, authority inheres only in persons.  While we may speak colloquially about the authority of the law or the authority of a book, it is a live lawyer or expert who actually makes moral decisions where moral authority is called for.  Patrick Lin, one of the ethics authorities cited in the Quartz article, realizes this and says that robot ethics is really just an exercise in looking at our own ethical attitudes in the mirror of robotics, so to speak.  And in saying this, he shows that the dream of relieving ourselves of ethical responsibility by handing over difficult ethical decisions to robots is just that—a dream. 

Sources:  The Quartz article "Can We Trust Robots To Make Moral Decisions?" by Olivia Goldhill appeared on Apr. 3, 2016 at http://qz.com/653575/can-we-trust-robots-to-make-moral-decisions/.  (I thank my wife for pointing it out to me.)  The statistic about the number of aged people in Japan is from http://www.techinsider.io/japan-developing-carebots-for-elderly-care-2015-11, and my information about Hitler's political courts appears on the website of the Holocaust Memorial Museum at https://www.ushmm.org/wlc/en/article.php?ModuleId=10005467.  Victor Lee Austin's Up With Authority was published in 2010 by T&T Clark International.

Monday, August 13, 2012

Questioning “Question Authority”


Engineers work in an authority structure that coordinates their individual efforts with the larger purposes of a corporation, a government, or an entire industry.  Authority is one of those taken-for-granted concepts that we don't often give much thought to.  There are those who view all authority with suspicion, and for some reason they seem especially widespread in New England, where "Question Authority" bumper stickers are almost as common as license plates; at least they were when I lived there in the 1990s.  The question I'd like to ask today is, can you be a good engineer and question authority too?  That is, is it consistent to be an ethical engineer and at the same time maintain a fundamentally skeptical and judgmental attitude toward all authorities?

For help, I will turn to Victor Lee Austin, whose book Up With Authority is the best explication of the many ramifications of the idea of authority that I have ever seen.  Although Austin is himself a theologian, he draws support from philosophers such as Yves R. Simon and Michael Polanyi, and the point he makes that bears on our question is one that believers and non-believers alike can understand.

Austin says early in his book that authority has a dual aspect.  Normally we think of a person in authority as having power to decide important matters.  This is the facet of authority that first comes to mind when I think about authority with respect to engineering.  In an architectural firm, for example, only certain licensed architects and engineers are authorized to sign off on blueprints (or whatever the electronic equivalent is these days).  But using the word “authorized” in that way brings up a second aspect of authority.

Authorities don’t just get up one day and declare themselves authorities.  They have to be authorized.  In the case of licensed engineers, the state board in charge of licensing engineers authorizes the engineer to sign off on designs.  So authorities must receive their authority from, well, other authorities.  And authorities, as Austin points out, are ultimately other persons.   Even when we cite a licensing board or a book as an authority, we really refer to the person or people behind these intermediate entities.  So you can’t have authority without speaking of authorities, that is, persons who have authority.

That raises the structural question of where authority ultimately comes from.  I mean, if A is authorized by B and B is authorized by C through F, can we ever trace the lines of authority to their final source?  Where does the buck stop, in other words?

Austin, being a theologian, spills these particular beans early in the book.  The ultimate authority, he says, is God.  But God, being a “clean different kind of a thing” from anything or anyone else, is not simply another link in the chain of authority.  The clearest way God exerts authority is through his people, that is, believers, although he has many other ways of doing so.  But if you don’t believe in God, where does that leave you with respect to authority?

If you believe that human beings are the highest form of sentient life, then human beings must also be the ultimate source of authority.  And this hypothesis, if you want to call it that, appears to cover a lot of ground, at least if you don't look too deeply.  For the nonbeliever as well as the believer, authority forms a complex web of interrelated authorizations and mutual consents.  Take an engineering licensing board as an example.  The people on the board are (or should be) licensed engineers themselves; their power to license comes from a state government, but the government in turn acknowledges their authority by recognizing them as technically qualified to authorize other engineers.  And most of what Austin says about the various kinds of authority—social, epistemic (having to do with knowledge), and political—is supported by philosophical arguments that nonbelievers can at least understand, if not necessarily agree to.

So if authority is a necessary thing that engineering cannot function without, what about questioning authority?  Austin covers this in a discussion of disputed authority.  Human authorities make mistakes, and that means unquestioning obedience to all authority is an overly simplistic way to live.  But he points out that attitude is everything in a situation where you believe an authority above you is in error.

On the one hand, you can allow yourself to rebel against the authority and all its works.  You may let yourself have thoughts like,  “Well, if that’s what he’s going to do, I hope the whole project goes to smash.”  With an attitude like this, you are one with the “Question Authority” bumper stickers and in fundamental revolt against the entire organization.  If this is your attitude, quitting your job would be more honorable than staying and undermining the enterprise from within.

The attitude Austin encourages in situations where authority must be questioned is one that philosopher Michael Polanyi exemplified before he left science to pursue philosophy.  Polanyi made a fundamental but unexpected discovery about an aspect of surface chemistry, but it took him upwards of twenty years of persistent research and accepting the repeated rejection of his papers before he managed to convince the scientific community of the truth of what he had found.  Even as the peer-review process turned against him, he respected the basic authority structure it represented and worked within its constraints to make the truth known.

This attitude toward authority, of respectful disagreement while preserving the basic structure, goes a long way toward summarizing a lot of engineering-ethics thought.  A whole book could be written about engineering ethics and authority, and while I’m not going to write it, it would be a good book to read.

So what’s the answer to our question?  Normal engineering requires one both to be an authority and to be under authority.  Anyone who arrogates authority to himself or herself without respecting superior authorities will not last long in engineering (or most other fields, either).  But now and then, you may find that your authorities, whoever they are, have made an error, ranging from a mistake in a textbook to an order to falsify test records for an engineering project.  It will then be your role to deal respectfully but truthfully with the error in a way that preserves the overall authority structure, but moves the organization toward the freedom for human flourishing that Austin recognizes as the ultimate purpose of all authority. 

Sources: Victor Lee Austin’s book Up With Authority:  Why We Need Authority to Flourish as Human Beings (New York:  T&T Clark International, 2010) was brought to my attention by an interview with the author on the Mars Hill Audio Journal (www.marshillaudio.org).