"Technology is neutral. It's only how it's used that can be good or bad."
Back in the 1960s and well into the 1970s, a statement
along those lines was the standard response you got from an engineer or
scientist if you raised questions about the dangers or moral implications of a
given invention. The neutrality
argument was used to defend radio, television, computers, and even nuclear
energy. But Sheila Jasanoff, for
one, would disagree.
Jasanoff teaches science and technology studies (STS) at
the Harvard Kennedy School. In an
editorial in the October issue of IEEE Spectrum, Jasanoff told editor-in-chief Susan Hassler that there
is no such thing as a value-neutral technology. The occasion was Jasanoff's new book, The Ethics of Invention: Technology and the Human Future
(Norton, 2016), in which she argues that every technology worthy of the
name is designed with some idea of the good in mind. And ideas of what is good don't come from the technology
itself; they come from the wider
culture, which invariably informs and shapes the motivations of those who
strive to create innovations that will do something that somebody, somewhere
will regard as good. Even the
terrorist assembling a kettle bomb in his basement thinks it will be good, in
his private sense, if the bomb goes off and kills people. So in that limited sense, every
technology is designed with some good in mind, and while the particular good
may be influenced by the technology, it is, as philosophers put it,
"logically prior" to the technology, at least most of the time.
So far so good.
But then Hassler goes on to say that STS programs such as the one
Jasanoff teaches in ought to be more closely integrated with the engineering
curricula of more schools, as they are already in a few places such as the
University of Virginia and Stanford.
Maybe if engineering students were obliged to take an in-depth look at the
social implications of technology, and STS students had to study more technical
subjects, we could avoid creating monsters that look good in the laboratory or
as prototypes, but end up causing disasters once they reach thousands or
millions of customers.
Hassler's position is one I'm in sympathy with. I spent seven years as an officer of
the IEEE's Society on Social Implications of Technology, and in the process met
a lot of interesting and thoughtful people who share Jasanoff's concern that,
as Hassler puts it, we seem to be stuck on a "hamster wheel of innovation,
disaster, and remediation."
In other words, the main way we seem to find out that a given technology
can be harmful is not by doing forward-thinking studies while it's still in the
planning stages, but by selling it on an industrial scale and then reaping the
adverse consequences when they become so obvious that we can't ignore
them.
Hassler complains that most engineering undergrads will
lump STS classes in with the other humanities as time-wasting compared to the
burdensome technical classes they must take in order to graduate. And by and large, she's right. This even goes for the subject that is
probably the most prominent educational intersection between engineering and
the humanities: engineering
ethics. Here at Texas State
University, philosophy courses are required for every undergraduate student on
campus, and engineering and philosophy faculty have worked together to get NSF
funding to sponsor an engineering-ethics-specific undergraduate philosophy
course. Hassler also cites
Stanford as a place where STS majors have to complete technical requirements as
well as humanities requirements.
But I would point out that, unless these humanities students go on to
get an advanced technical degree, they are not going to have the influence on
real-world innovations that engineering students would have.
I think the basic problem here is not educational, but
attitudinal. The type of person
who goes in for an engineering degree likes to think that he or she is going to
make a positive difference by helping to create innovative products and
services that, yes, are regarded as good by somebody. The basically optimistic mindset this requires is often at cross-purposes
with the critical stance that many STS subjects demand. I'm not saying that all
STS people are anti-technologists.
Many of them are former engineers or engineering students whose
enthusiasm for their technical studies carried them on to
explore the wider social implications of technology, and who remain basically
supportive of it.
But to sustain a career, one must establish a basic
point of view, and answer a question like this: Am I going to join this technical field as a participant and
team player, not stopping to question the basic goodness of what I'm doing, but
taking reasonable precautions to avoid foreseeable harm? Or am I going to devote my life to
viewing this technology from the outside, observing its effects and consequences
on various organizations and groups of people, and thinking and writing about
that? It's not as simple a
division as action versus contemplation, but it comes close. And the fact of the matter is that many
of the adverse consequences of certain technologies, such as burning fossil
fuels, were invisible and undetectable until it was
far too late to forestall the harm.
Some bad effects simply cannot be discovered until a technology is
already in widespread use.
I sympathize with Jasanoff's concern, and with Hassler's wish
that STS were something more engineers and scientists knew about. But I'm not sure that if we just had engineers
taking more STS courses and STS majors taking more engineering courses,
the world would be much safer than it is now.