When I was teaching an engineering ethics module for a few
semesters, one of the first things I asked the students to do was to spend five
minutes writing an answer to this question: “How do you tell the difference
between right and wrong conduct?”
The responses usually fell into three categories.
Many students would say that they rely on what amounts to
intuition, a “gut feeling” that a course of action is right or wrong. Nearly as popular was the response that
they look to how other people would act in similar circumstances. Very rarely, a student would say that
he relies on religious guidelines such as the Ten Commandments or the
Quran. These results are
consistent with a claim by neuroscientist Josh Greene that many of our moral
decisions are guided, if not determined, by the way our brains are wired. But we can rise above our instinctive
moral habits by consciously thinking about our ethical dilemmas and applying
reason to them.
An article in Discover
magazine outlines the research Greene and others have done with
sophisticated brain imaging techniques such as fMRI (functional magnetic
resonance imaging), which indicate regions of the brain that become more active
when the mind is engaged in certain activities. Greene finds that routine ethical choices such as whether to
get up in the morning are handled by lower-level parts of the brain that we
share with other less-developed animals.
But when he poses hard ethical dilemmas to people, the more
sophisticated areas of the brain go to work, handling tasks such as reasoning
about probabilities, along with the parts that control simpler
instinctive actions.
One of the ethical dilemmas Greene uses is a form of the
“trolley problem” conceived by some philosophers as a test of our ethical
reasoning abilities. As
Philippa Foot posed the problem in
1967, you are asked to assume that you are the driver of a tram or trolley that
is out of control, and the only choice of action you have is which track to
follow at an upcoming switch.
There is one man working on the section of track following one branch of
the switch, and five men on the other branch. Which branch do you choose, given that someone is going to
be killed either way?
Greene has found that these and similar hard-choice ethical
problems cause the brain to light up in objectively different ways than it does
when a simpler question is posed such as, “Is it right to kill an innocent
person?” Whether or not these
findings make a difference in how you approach ethical decision-making depends
on things that go much deeper than Greene’s experiments with brain analysis.
But first, let me agree with Greene when he says that the
world’s increasing complexity means that we often have to take more thought
than we are used to when making ethical decisions. One reason I favor formal instruction in engineering ethics
is that the typical gut-reaction or peer-pressure methods of ethical
decision-making that many students use coming into an ethics class are not adequate
when the students find themselves dealing after graduation with complex
organizations, multiple parties affected by engineering decisions, and
complicated technology that can be used in a huge number of different ways. Instinct is a poor guide in such
situations, and that is why I encourage students to learn basic steps of
ethical analysis so that they are prepared to think about such situations
with at least as much brain power as they would use to solve a technical
problem. This is a novel idea to
most of them, but it’s necessary in today’s complex engineering world.
That being said, I believe Greene, and many others who take
a materialist view of the human person, are leaving out an essential fact about
moral reasoning and the brain. The
reigning assumption made by most neuroscientists is that the self-conscious
thing we call the mind is simply a superficial effect of what is really going
on in the brain. Once we figure
out how the brain works, they believe, we will also understand how the mind
works. While it is important to
study the brain, I am convinced that the mind is a non-material entity which
uses the brain, but is not reducible to the brain. And I also believe we cannot base moral decisions upon pure
reason, because reason always has to start somewhere. And where you start has an immense influence on where you
end up.
As a Christian supernaturalist, I maintain that God has put
into every rational person’s heart a copy, if you will, of the natural laws of
morality. This is largely, but not
exclusively, what Greene and other neuroscientists would refer to as
instinctive moral inclinations, and they would trace them back to the brain
structures they claim were devised by evolution to cope with the simpler days
our ancestors lived in. (If they
really think ancient times were simpler, they should try living in the jungle by
their wits for a week and see how simple it is.)
God has also made man the one rational animal, giving him the ability to
reason and think, and God intends us to use our minds to make the best possible
ethical decisions in keeping with what we know about God and His revealed
truth. This is a very different
approach to ethics from the secular neuroscience view, but I am trying to make
vividly clear what the differences are in our respective foundational beliefs.
So both Greene and I think there are moral decisions that
can be made instinctively, and those that require higher thought
processes. But what those higher
thought processes use, and the assumptions they start from, are very different
in the two cases. I applaud Greene
for the insights he and his fellow scientists have obtained about how the mind
uses the brain to reach moral decisions.
But I radically disagree with him about what the outcomes of some of
those decisions should be, and about the very nature of the mind itself.
Sources: The Discover
magazine online version of the article on Josh Greene’s research can be found
in the July-August 2011 edition at http://discovermagazine.com/2011/jul-aug/12-vexing-mental-conflict-called-morality. I also referred to the Wikipedia
article on “Trolley problem.”