Monday, September 30, 2019

Jonathan Franzen Gives Up On Controlling the Climate


Jonathan Franzen is a novelist who also writes essays for publications such as The New Yorker.  As he admits, he's neither a scientist nor a policy wonk, but that doesn't keep him from putting his oar in on climate change. 

In a recent essay entitled "What If We Stopped Pretending?," posted on The New Yorker's website, Franzen gives what at first glance appears to be a counsel of despair.

First, he admits that anybody under thirty is "all but guaranteed" to witness what he calls the "radical destabilization of life on earth—massive crop failures, apocalyptic fires, imploding economies, epic flooding, hundreds of millions of refugees fleeing regions made uninhabitable by extreme heat or permanent drought."  This will happen when "climate change, intensified by various feedback loops, spins completely out of control."  The only way to keep this from happening, according to authorities he cites such as the Intergovernmental Panel on Climate Change, is if every major greenhouse-gas-emitting nation on the planet imposes what amounts to a climate dictatorship:  instituting "draconian conservation measures, shut[ting] down much of its energy and transportation infrastructure, and completely retool[ing] its economy."  And that means everybody, not just folks who agree with the idea.  And here he gets personal:  "Making New York City a green utopia will not avail if Texans keep pumping oil and driving pickup trucks."  (I live in Texas, but I don't personally drive a pickup truck.)

Then he says in effect, "Hey, I'm a realist.  This isn't going to happen.  So you know what?  I'm giving up on it.  We might as well face it:  the apocalypse is coming, and we better just get ready for it."  We shouldn't quit trying to reduce carbon emissions, but we also shouldn't con ourselves into believing that our little token individual actions are going to make much difference. 

He winds up his essay by encouraging people to make their own little corner of the world better in whatever way they can—improving democratic governance, helping the homeless, and just generally being a good citizen, whether or not it makes a difference in climate change.  "To survive rising temperatures, every system, whether of the natural world or of the human world, will need to be as strong and healthy as we can make it."  In other words, we should fight smaller battles we have a reasonable chance of winning instead of putting all our eggs in the basket of averting climate change.

There is a syndrome that workers in the helping professions call "compassion fatigue."  Even if a naturally compassionate person chooses a job such as assisting Alzheimer's patients or children with terminal cancer, constantly having to come up with sympathy for someone who isn't going to get better can be tremendously draining.  And after months or years of such work, some people simply burn out—they can't take it anymore. 

Something like this seems to have happened to Franzen.  If he's like many people who see climate change as the most important existential threat to humanity, it's the kind of thing that you can never quite put out of your mind.  If you're not actively part of the solution, out there with Greta Thunberg protesting on the steps of the UN, then you're part of the problem merely by living a normal life in the U. S.  It's understandable that Franzen would choose to unburden himself by saying publicly, "Look, let's face it.  The train's coming at us in the tunnel and there's no way out.  Let's use the time we have to make things better, rather than fooling ourselves into thinking we can stop the train."

I'm not a climate scientist either, but I'm willing to make a prediction that I feel very confident about.  The way that climate change actually plays out is not going to fit anybody's prediction exactly, simply because it's far too complicated and long-term for anyone to predict with accuracy.

In 2018, the peak level of carbon dioxide in the atmosphere was 407 parts per million, up about 2.5 ppm from the previous year.  The last time it was about that high was some 20 million years ago, and the all-time high for carbon dioxide, according to various estimates that scientists have made, was around 2000 ppm some 200 million years ago.  So it's not as though the planet has never seen such high levels before.  Life survived, although many species went extinct and others arose to take their places.
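
To put those numbers in perspective, here is a back-of-envelope calculation.  It is only a sketch, and it assumes the recent growth rate simply stays constant, which it almost certainly won't; if anything, the rate has been creeping upward:

```python
# Crude linear projection of atmospheric CO2, assuming the recent
# growth rate of about 2.5 ppm/year simply continues unchanged.
current_ppm = 407          # approximate 2018 peak level
growth_per_year = 2.5      # ppm added per year, recent average

for target_ppm in (500, 2000):
    years = (target_ppm - current_ppm) / growth_per_year
    print(f"About {years:.0f} years to reach {target_ppm} ppm")

# Output: about 37 years to 500 ppm, and about 637 years to the
# prehistoric high of 2000 ppm, if nothing about the trend changed.
```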

Now admittedly, we are doing a radical thing to the planet, and there will be consequences.  But just as the way an individual human deals with a threatening crisis affects the outcome, the way human beings deal with what may turn into a climate crisis will also affect the future of humanity. 

When Franzen writes that climate change may "spin out of control," I would point out that strictly speaking, climate has never been under our control.  True, you can adjust a thermostat that is labeled "Climate Control," but its influence is limited to your house.  For all of human history, the weather has been something that human beings simply had to accept, not something they could control in any meaningful sense.  We are now engaged in the first-ever unintentional attempt at climate control, or at any rate climate influence, by emitting so much carbon dioxide, and in the coming years and decades we will be scrambling to deal with the consequences. 

But not in the way Franzen fantasizes in his scenario to stop worldwide emissions.  If the world really shut down much of its energy and transportation infrastructure, that in itself would cause economies to implode.  So in that case the cure for climate change would be just as bad as the disease. 

The only way humans have survived on this planet as long as we have is that we are adaptable.  If crops start failing in some parts of the world, growing conditions in other parts may improve.  If coastlines shrink, people have the ability to move, assuming their governments will let them.  Franzen has caught a lot of flak for his essay, but I think he ends up in a better place than a lot of other people who keep banging the same drum in favor of a global climate dictatorship.  I agree with his advice to do what you can to limit climate change, but mainly, start with yourself:  be a better person, and make the part of the world you can control a better place, no matter how warm it gets.

Sources:  Jonathan Franzen's essay "What If We Stopped Pretending?" appeared on Sept. 8, 2019 on The New Yorker website at https://www.newyorker.com/culture/cultural-comment/what-if-we-stopped-pretending.  For historical numbers on carbon dioxide levels, I consulted a graph published in Nature at https://www.nature.com/articles/ncomms14845/figures/4. 

Monday, September 23, 2019

Moral Machines?


By now you may be used to asking your phone or Siri or Alexa questions and expecting a reasonable answer.  Alan Turing's 1950 dream that computers might one day be powerful enough to fool people into thinking they were human is realized every time someone calls a phone tree and thinks the voice on the other end is human when it's actually a computer.

The programmers setting up artificial-intelligence virtual assistants such as Siri and human-sounding phone trees aren't necessarily trying to deceive consumers.  They are simply trying to make a product that people will use, and so far they've succeeded pretty well.  Considering the system as a whole, the AI part is still pretty low-level, and somewhere in the back rooms there are human beings keeping track of things.  If anything gets too far out of hand, the back-room folks stand ready to intervene.

But what if it were computers all the way up?  And what if the computers were, by some meaningful measure, smarter overall than humans?  Would you be able to trust what they told you if you asked them a question? 

This is no idle fantasy.  Military experts have been thinking for years about the hazards of deploying fighting drones and robots with the ability to make shoot-to-kill decisions autonomously, with no human being in the loop.  Yes, somewhere in the shooter robot's past there was a programmer, but as AI systems become more sophisticated and even the task of developing software gets automated, some people think we will see a situation in which AI systems are doing things that whole human organizations do now:  buying, selling, developing, inventing, and in short, behaving like humans in most of the ways humans behave.  The big worrisome question is:  will these future superintelligent entities know right from wrong?

Nick Bostrom, an Oxford philosopher whose book Superintelligence has jacket blurbs from Bill Gates and Elon Musk, is worried that they won't.  And he is wise to worry.  In contrast to what you might call logic-based intellectual power, in which computers already surpass humans, whatever it is that tells humans the difference between right and wrong is something that even we humans don't have a very good handle on yet.  And if we don't understand how we can tell right from wrong, let alone do right and avoid wrong, how do we expect to build a computer or AI being that does any better?

In his book, Bostrom considers several ways this could be done.  Perhaps we could speed up natural evolution in a supercomputer and let morality evolve the same way it did with human beings.  Bostrom drops that idea as soon as he's thought of it, because, as he puts it, "Nature might be a great experimentalist, but one who would never pass muster with an ethics review board—contravening the Helsinki Declaration and every norm of moral decency, left, right, and center."  (The Helsinki Declaration was a document signed in 1964 that sets out principles of ethical human experimentation in medicine and science.) 
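
To see why he drops it so quickly, it helps to look at what even a toy version of "evolving" behavior in software involves.  The sketch below is my own illustration, not anything from Bostrom's book, and all the names and numbers in it are arbitrary.  The crucial part is the fitness function:  before evolution can select for "good" behavior, a human still has to write down what counts as good, which is the very problem we were hoping evolution would solve for us.

```python
import random

GENOME_LEN = 20      # each "agent" is just a string of bits
POP_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.01

def fitness(genome):
    # Placeholder moral score:  here, simply the count of 1s.  Deciding what
    # this function should really measure IS the machine-morality problem.
    return sum(genome)

def mutate(genome):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

def crossover(a, b):
    # Splice two parent genomes at a random cut point.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]    # keep the "best-behaved" half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print("Best score after evolving:", max(fitness(g) for g in population))
```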

But to go any farther with this idea, we need to get philosophical for a moment.  Unless Bostrom is a supernaturalist of some kind (e. g. Christian, Jew, Muslim, etc.), he thinks that humanity evolved on its own, without help or intervention, and is a product of random processes and physical laws.  And if the human brain is simply a wet computer, as most AI proponents seem to believe, one has to say it has programmed itself, or at most that later generations have been programmed (educated) by earlier generations and life experiences.  However you think about it in this context, there is no independent source of ideal rules or principles against which Bostrom or anyone else could compare the way life is today and say, "Hey, there's something wrong here." 

And yet he does.  Anybody with almost any kind of a conscience can read the news or watch the people around them and see stuff going on that we know is wrong.  But how do we know that?  And more to the point, why do we feel guilty when we do something wrong, even as young children? 

To say that conscience is simply an instinct, like the way birds know how to build nests, seems inadequate somehow.  Conscience involves human relationships and society.  The experiment of rearing a baby in total isolation from human beings has never been tried intentionally (thank the Helsinki Declaration for that), but something close to it has happened by accident in large emergency orphanages, and such babies typically die.  We simply can't survive without human contact, at least right after we're born. 

And dealing with other people allows for the possibility of hurting them, and I think that is at least the practical form conscience takes.  It asks, "If you do that terrible thing, what will so-and-so think?"  But a well-developed conscience would keep you from doing bad things even if you were alone on a desert island.  It doesn't even let you live at peace with yourself if you've done something wrong.  So if conscience is simply a product of blind evolution, why would it bother you if you did something that never hurt anybody else, but was wrong anyway?  What's the evolutionary advantage in that?

Bostrom never comes up with a satisfying way to teach machines how to be moral.  For one thing, you would like to base a machine's morality on some logical principles, which means a moral philosophy.  And as Bostrom admits, there is no system of moral philosophy that most moral philosophers agree on, which means that most moral philosophers must be wrong about morality. 

Those of us who believe that morality derives not from evolution, or experience, or tradition, but from a supernatural source that we call God, have a different sort of problem.  We know where conscience comes from, but that doesn't make it any easier to obey it.  We can ask for help, but the struggle to accept that help from God goes on every day of life, and some days it doesn't go very well.  And as for whether God can teach a machine to be moral, well, God can do anything that isn't logically contradictory.  But whether he'd want to, or whether he'd just let things take their Frankensteinian course, is not up to us.  So we had better be careful. 

Sources:  Nick Bostrom's Superintelligence:  Paths, Dangers, Strategies was published in 2014 by Oxford University Press.

Monday, September 16, 2019

Facing Google In Your Living Room

An article on cnet.com recently described how Google's new smart assistant, called Google Nest Hub Max, uses facial recognition technology to tell who is talking with it.  This feature has raised privacy concerns, as Google has admitted that it reserves the right to upload facial data from the device to the cloud to help improve "product experience."  But whatever Google does legitimately, a hacker might be able to do too, and so we are approaching a time when the telescreens of George Orwell's dystopian novel Nineteen Eighty-Four become a reality—not because of the unilateral command of a totalitarian government (at least not in the U. S.), but because we want what they can do.

For those unfamiliar with the novel, Orwell's book was a warning to the free world to beware of what a dictatorship could do with the communications technologies of the future.  Telescreens were two-way televisions on which propaganda from a dictator known only as Big Brother was transmitted, and through which images of whoever was watching were transmitted back to the party's central headquarters.  Orwell was simply extrapolating the efforts of regimes such as the Nazis of the 1930s and the Soviet Union of the 1940s to spy on their populace twenty-four hours a day to enforce total obedience to the regime. 

At the time the novel was published in 1949, no one took the telescreen-spying idea very seriously, because it would take a huge number of human monitors to spy on a significant number of people.  Carried to its logical extreme, the only way the government could watch everybody would be if half the population spied on the other half, and then took turns. 
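
A quick back-of-envelope calculation shows why.  All the numbers below are illustrative assumptions of mine, not anything from Orwell:

```python
# How many human monitors would round-the-clock telescreen surveillance need?
# Every figure here is an assumption chosen just to show the scale of the problem.
population = 50_000_000        # people to be watched
screens_per_monitor = 10       # feeds one watcher might plausibly scan at once
shifts_per_day = 3             # 8-hour shifts for 24-hour coverage

monitors = population / screens_per_monitor * shifts_per_day
print(f"{monitors:,.0f} monitors needed")   # 15,000,000, nearly a third of the population
```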

But neither Orwell nor anybody else at the time reckoned on the development of advanced artificial-intelligence (AI) systems using facial recognition technology.  In China, the government is deploying many thousands of cameras and taking facial data from millions of people with the intention of developing a Social Credit rating that scores how well you measure up to the regime's model of the ideal citizen.  If computers catch you on camera going to suspicious places or meetings, your Social Credit score could go in the tank, making it hard to travel, get a job, or even stay out of jail. 

None of that is happening in the U. S., but the fact that a large corporation will now have electronic access to views in millions of private residences should at least give us pause. 

Leaving the hardware aside for the moment, let's examine the difference in motives between a government, such as in Nineteen Eighty-Four, spying on its citizens for the purposes of controlling behavior, and a commercial entity such as Google using images to sell both its own services and advertising for others.  The government spying is motivated by suspicion and fear of what people might be doing while the government isn't watching them.  Whatever the regime sets out as an ideal of behavior, it watches for deviations from that ideal, and punishes those who deviate from it.  Participation is not voluntary, and people have to go to great lengths to avoid being spied on.

Now contrast that with a benign-looking thing such as the Google Nest Hub Max.  Nobody is going to make you buy one.  And if you do, there are ways of turning off the facial-recognition feature, though the device will be less convenient to use.  And the device is intended to serve you, not the other way around.  It's sold with the vision portrayed in so many TV ads of people happily using it to make their lives better, not as a means of social control like Orwell's telescreens. 

But maybe the differences are not as great as they first appear.  Both the telescreen and the Nest Hub Max are intended to change behavior.  If they don't, they have failed.  True, the ideal behavior that a totalitarian government wants and the ideal behavior that a company wants are two different things.  But neither ideal is the way the citizen-consumer was before the screen or Nest Hub showed up, namely, unwatched and unbenefited by the products or services that the company wants to sell.

Nobody should read this blog and then go around saying "Ahh, Stephan's saying Google is Big Brother and they're trying to take over our lives!"  That's not the comparison I'm making.  My point is that the mere fact of being watched by someone, or something that can inform someone about us, is going to change our behavior.  And that change by itself is significant.

Now, the change may not necessarily be bad.  Already, virtual audio assistant devices such as Alexa have been used in criminal cases when bad actors set them off, either by accident or on purpose, and the data thus generated has proved to be incriminating.  Though this is ancient history, I am told that in the days when some middle-class and upper-class people had servants, families tended to behave better when the servants were around, although I'm sure there were exceptions.  Alexa isn't Jeeves the butler, but as virtual assistants play a more significant role in domestic life, it's not beyond imagination to think that some of the worst behavior in homes—domestic abuse, for example—might be mitigated if the victim could call 911 by just shouting it instead of having to pull out a phone.

I'm not necessarily crying doom and gloom here.  Millions of people are already using virtual assistants with few if any problems, and adding two-way video to the mix will only increase the devices' capabilities.  But we are entering a new territory of connectivity here, and it's bound to have some effects that nobody has predicted yet.  Perhaps it's not too helpful to predict that there will be unpredictable effects, but that's all I can do at the moment.  Let's hope that the security features of the Nest Hub Max are good enough to prevent nefarious use, and that people who buy them are truly happier with them than they were before. 

Sources:  The article "Google collects face data now.  Here's what it means and how to opt out" appeared on Sept. 11, 2019 at https://www.cnet.com/how-to/google-collects-face-data-now-what-it-means-and-how-to-opt-out/#ftag=CADf328eec.  I also referred to Wikipedia articles on Nineteen Eighty-Four and Google Home.  I thank my wife for pointing this article out to me.

Monday, September 09, 2019

Vaping Turns Deadly


At this writing, three people have died and hundreds more have become ill from a mysterious lung ailment that is connected with certain types of e-cigarettes.  The victims typically have nausea or vomiting at first, then difficulty breathing.  Many end up in emergency rooms and hospitals because of lung damage.

Most of the sufferers are young people in their teens and twenties, and all were found to have been using vaping products in the previous three months.  Many but not all were using e-cigarettes laced with THC, the active ingredient in marijuana.  Others were vaping only nicotine, but some early analysis indicates that a substance called vitamin E acetate was found in many of the users' devices.  It's possible that this oily compound is at fault, but investigators at the U. S. Centers for Disease Control (CDC) and the Food and Drug Administration (FDA) have not reached any conclusions yet. 

In fact, the two agencies have released different recommendations in response to the crisis.  The CDC is warning consumers to stay away from all e-cigarettes, but the FDA is limiting its cautions to those containing THC.  Regardless, it looks like a damper has been put on the vaping party, one that may change a lot of things.

So far, the vaping and e-cigarette industry is largely unregulated, unlike the tobacco industry.  The modern e-cigarette found its first mass market in China in the early 2000s.  The technology was made possible by the development of high-energy-density lithium batteries, among other things.  While vaporizers for medical use have been around since at least the 1920s, it wasn't possible to squeeze everything needed into a cigarette-size package until about fifteen years ago. 

Since then, vaping has taken off among young people.  A recent survey of U. S. 12th-graders shows that about 20% of them have vaped in the last 30 days, up from only about 11% in 2017, the sharpest two-year increase in the use of any drug that the National Institutes of Health has measured in its forty-odd years of doing such surveys.

The ethical question of the hour is this:  has vaping become popular enough, mature enough, and dangerous enough, that some kind of regulation (either industrial self-policing or governmental oversight) is needed?  The answer doesn't hinge only on technical questions, but on one's political philosophy as well.

Take the extreme libertarian position, for example.  Libertarians start out by opposing all government activity of any kind, and then grudgingly allow certain unavoidable activities that are needed for a nation to be regarded as a nation:  national defense, for instance.  It's not reasonable to expect every household to defend itself against foreign aggression, so most libertarians admit the necessity of maintaining national defense in a collective way. 
           
But on an issue such as a consumer product, the libertarian view is "caveat emptor"—let the buyer beware.  If you choose to buy an off-brand e-cigarette because it promises to have more THC in it than the next guy's does, that's your business.  And if there's risk involved, well, people do all sorts of risky things that the government pays no attention to:  telling your wife "that dress makes you look fat" is one example that comes to mind. 

On the opposite extreme is the nanny-state model, favored generally by left-of-center partisans who see most private enterprises, especially large ones, as the enemy, and feel that government's responsibility is to even out the unfair advantage that huge companies have over the individual consumer.  These folks would regulate almost anything you buy, and have government-paid inspectors constantly checking for quality and value and so on. 

It's impractical to run your own bacteriological lab to inspect your own hamburgers and skim milk, so the government is supposed to do that for you.  Arguably, it's also impractical for vapers to take samples of their e-cigarette's goop and send it to a chemical lab for testing, and then decide on the basis of the results whether it's safe to use that particular product. 

My guess at this point is that sooner or later, probably sooner, the e-cigarette industry is going to find itself subject to government standards for something.  Exactly what isn't clear yet, because we do not yet know what exactly is causing the mysterious vaping illnesses and deaths.  But when we do, you can bet there will be lawsuits, and at a minimum calls for regulation of the industry. 

Whether or not those calls are heeded will depend partly on the way the industry reacts.  Juul, currently the largest maker of vaping products, is one-third owned by Altria, the corporate entity formerly known as Philip Morris Companies.  In other words, the tobacco makers have seen the vaping handwriting on the wall, and are moving into the new business as their conventional tobacco product sales flatten or decline. 

The tobacco companies gained a prominent place in the Unethical Hall of Fame when they engaged in a decades-long campaign of disinformation to combat the idea that smoking could hurt or kill you, despite having inside information that it very well could.  In the face of an ongoing disaster such as the vaping illness, this ploy doesn't work so well.  But they could claim that only disreputable firms would sell vaping products that cause immediate harm, and pay for studies that show vaping is better than smoking and harmless for the vast majority of users.

Sometimes the hardest thing to do is be patient, and that's what we need to do right now, rather than rushing to conclusions that aren't supported by clinical evidence.  Investigators should eventually figure out what exactly is going on with the sick and dying vapers, and once we know that, we'll at least have something to act on.  Until then, if by chance anyone under 30 is reading this blog, take my advice:  leave those e-cigarettes alone. 

Monday, September 02, 2019

Lawyers in Space?


A recent Washington Post article highlights what would normally be a humdrum domestic dispute alleging identity theft.  The unusual feature of the dispute is that the party who allegedly accessed a bank account without permission did it from the International Space Station, and thus may have committed the first legally recognized crime in space.

Anywhere humans go, the lawyers can't be far behind.  While Shakespeare probably got a laugh in his play Henry VI when the criminal type Dick the Butcher said, "The first thing we do, let's kill all the lawyers," the context was not a sober discussion of how to make a better society.  Dick and his rebel friends were imagining a fantasy world made to their liking, where all the beer barrels would be huge, all the prices low, and naturally, there wouldn't be any lawyers to get people like them into trouble.

Law-abiding citizens need have no fear of reasonable laws, and so it's only right that there are some treaties and international agreements that govern humans and human-made artifacts in space. 

The foundational agreement is something called the Outer Space Treaty, which over 100 countries have signed, including every nation that is currently capable of orbiting hardware in space.  Implemented in 1967, its most prominent theme is that space is for peaceful uses only.  It therefore prohibits keeping nuclear weapons in space.  It also forbids any country from claiming sovereignty over any part of outer space, including any celestial body.  So when the U. S. planted its flag on the moon, it was just a symbolic gesture, not the first step in creating a fifty-first state with zero population.

Right now, there are companies making quite serious plans to do space mining, build space hotels, and engage in other profit-making activity that would involve substantial amounts of investment of both hardware and human capital.  There's a concern that the Outer Space Treaty is silent on the question of individual or corporate ownership of space property, and unless we get more specific on what the rules are, such developments may be stifled.

I don't see any critical problem here, because we have abundant precedents in a similar situation:  the international laws governing ocean-going vessels.  The Disney Company puts millions of dollars into what amounts to floating hotels, and they quite happily manage to cope with the fact that while they own the ship, it travels in international waters and docks at various ports owned by other countries.  Of course, there are hundreds of years of tradition bound up in the law of the sea, and the same isn't true of space law.  But the fact that ocean-going commerce goes on quite smoothly for the most part shows such things can be done, and so that doesn't concern me at all.

What could throw the whole situation into doubt is if somebody finds a fantastically lucrative space-based enterprise.  Diamonds on the moon sounds like something that Edgar Rice Burroughs would cook up, but there are quite serious organizations out there planning to do things like mining asteroids.  And depending on what they find, we might see something like the rush of the Old World explorers to the New World, where they in fact did discover gold.  Like most naive fantasies, that discovery didn't work out quite as nicely as the explorers hoped, what with the abysmal treatment of Native Americans and the disastrous inflation that the introduction of huge amounts of gold caused in the economies of Europe. 

It's hard to imagine something similar happening as a result of a space-based discovery, but stranger things have happened.  The optimist in me, along with any number of Silicon Valley types who seem to think that space colonization is not only possible but inevitable, and represents the last best hope of humanity, would like to see the future of space exploration and settlement as another chance for us to get some things right.  After all, something like that was the motivation for many Europeans to make the arduous journey to the New World, where unknown hardships awaited them.  Overall I'm glad they did, or else I would not have the opportunity to live in Central Texas today.

But the identity-theft case in the International Space Station reminds us that no matter what idealistic plans we make, we will take all our mental and behavioral baggage with us wherever we go.  That is why we will always need lawyers, whether in San Marcos or on a moon base:  a certain number of us will misbehave beyond the boundaries that ordinary people around us can deal with, and the law will have to get involved.  In this case, all the parties to the identity-theft dispute turned out to be U. S. citizens, so U. S. law applies.  But in the future, when space colonies (for lack of a better phrase) may want to set up their own independent governments, things may get considerably more complicated.  And complications mean lawyers.

One thing I haven't mentioned is the question of militarization in space.  President Trump recently announced the establishment of a Space Command, which is apparently a kind of umbrella under which the space activities of the various branches of the military will be gathered.  While the Outer Space Treaty prohibits "weapons of mass destruction" in space, it does nothing to stop nations from testing weapons or placing military personnel in space. 

It is perhaps inevitable that rivalries on the ground will end up being played out in space as well.  But we can hope that for the near future, anyway, the need for lawyers and law in space will be limited to minor issues such as the identity-theft case, and that we can view space as a place where for the time being, people can just get along.  But if they don't, I'm sure lawyers will find a way to get involved.

Sources:  Deanna Paul's article "Space:  The Final Legal Frontier" appeared on Aug. 31 on the Washington Post website at https://www.washingtonpost.com/technology/2019/08/31/space-final-legal-frontier/.  I also referred to Wikipedia articles on the Outer Space Treaty and "Let's kill all the lawyers."