Monday, May 22, 2017

Your Money Or Your Data: The WannaCry Ransomware Attack


On May 12, thousands of users of Windows computers around the globe suddenly saw a red screen with a big padlock image and a headline that read, "Ooops, your files have been encrypted!"  It turned out to be a ransom note generated by an Internet worm called WannaCry.  The ransom demanded was comparatively small—about US $300—but the attack itself was not.  The most critical damage was done in Great Britain, where many National Health Service computers locked up, causing delays in surgery and preventing access to files containing critical patient data.  Fortunately, someone found a kill switch for the worm and its spread was halted, but over 200,000 computers were affected in over 100 countries, according to Wikipedia.
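
For the technically curious, here is a rough sketch in Python of how a kill switch of this kind works; the domain name below is made up, and the real worm's code was of course far more complicated.  The worm tried to contact a hard-coded web address before doing anything else, and if that address answered, it quietly shut itself down.

```python
import urllib.request

# Hypothetical stand-in for the hard-coded gibberish domain the worm checked.
KILL_SWITCH_URL = "http://example-nonsense-domain-xyz.invalid/"

def kill_switch_active() -> bool:
    """Return True if the kill-switch domain answers, i.e. the worm should stop."""
    try:
        urllib.request.urlopen(KILL_SWITCH_URL, timeout=5)
        return True          # domain resolved and answered: stand down
    except Exception:
        return False         # domain unreachable: the real worm kept spreading

if kill_switch_active():
    print("Kill switch engaged; doing nothing.")
else:
    print("Kill switch not found; a real worm would proceed from here.")
```

Registering one domain name, in other words, was enough to flip that check for every new copy of the worm.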

No one knows for sure who carried out this attack, although we do know the source of the software that was used:  the U. S. National Security Agency, which developed something called the EternalBlue exploit to spy on computers.  Somehow it got into the wild and was weaponized by a group that may be based in North Korea, but no one is sure. 

At this writing, the attack is mostly over except for the cleanup, which is costing millions as backups are restored or files are re-created from scratch, where possible.  Experts recommended not paying the ransom, and it's estimated that the perpetrators didn't make much money on the deal, which was payable only in bitcoin, the digital currency that is difficult to trace to any individual. 

Writing in the New York Times, op-ed contributor Zeynep Tufekci of the School of Information and Library Science at the University of North Carolina put the blame for the attack on software companies.  She claims that the way upgrades and security patches are handled is itself exploitative and does a disservice to customers, who may have good reasons not to upgrade a system.  This was painfully obvious in Great Britain, where the National Health Service was running lots of old Windows XP systems, although the vast majority of the computers affected were running the more recent Windows 7.  Her point was that life-critical systems such as MRI machines and surgery-related instruments are sold as a package, and incautious upgrading can upset the delicate balance that is struck when a Windows system is embedded into a larger piece of technology.  She suggested that companies like Microsoft take some of the $100 billion in cash they are sitting on and spend it on free upgrades for customers who would normally have to pay for the privilege.

There is plenty of blame to go around in this situation:  the NSA, the NHS, Microsoft, and ordinary citizens who were too lazy to install patches that they had even paid for.  But such a large-scale failure of what has become by now an essential part of modern technological society raises questions that we have been able to ignore, for the most part, up to now.

When I described a much smaller-scale ransomware attack in this space back in March, I likened it to a foreign military invasion.  That analogy doesn't seem to be too popular right now, but I still think it's valid.  What keeps us from viewing the two cases similarly has to do with the way we've been trained to look at software, and the way software companies have managed to use their substantial monopolistic powers to set up conditions in their favor.

Historically, such monopolistic abuse has come to an end only through vigorous government action to call the monopoly to account.  The U. S. National Highway Traffic Safety Administration, for example, can conduct investigations and levy penalties on automakers that violate the rules or behave negligently.  So far, software firms have almost completely avoided any form of government regulation, and the free-marketers among us have pointed to them as an example of how non-intervention by government can benefit an industry. 

Well, yes and no.  People have made a lot of money in the software and related industries, or rather a few people have, because the field is notorious for the huge returns it can give the few dozen employees and entrepreneurs who happen to get a good idea first, implement it, and dominate a new field (think Facebook).  But the same companies charge customers over and over again for the ever-required upgrades and security patches, which are often bundled together so that you can't keep the software you like without having it get hacked sooner or later.  Once you realize that, the difference between a software company and an old-fashioned protection racket, the kind where a guy flipping a blackjack in his hand walks into your candy store, looks around, and says, "Nice place you got here—a shame if anything should happen to it," starts to look pretty thin.

Software performs a valuable service for billions of people, and I'm not calling for a massive takeover of software firms by the government.  And users of software have some responsibility for doing maintenance, assuming that the maintenance is of reasonable cost, isn't impossibly hard to do, and doesn't lead to situations that make the software less useful.  But when a major disaster like WannaCry can cause such global havoc, it's time to rethink the fundamentals of how software is designed, sold (technically, it's licensed, not sold), and maintained.  And like it or not, the U. S. market has a huge influence on these things.

Even the threat of regulation can have a most salutary effect on monopolistic firms, which to avoid government oversight often enter voluntarily into industry-wide agreements to implement reforms rather than let the government take over the job.  It's unlikely that the current chaos going on in Washington is a good environment in which to undertake this task, but there needs to be a coordinated, technically savvy, but also ethically deep conversation among the principals—software firms, major customers, and government regulators—to find a different way of doing security and upgrades, which are inextricably tied together. 

I don't know what the answer is, but companies like Microsoft may have to accept some form of restraint on their activities in exchange for remaining free of the heavy hand of government regulation.  The alternative is that we continue muddling along as we have been while the growth of the Internet of Things (IoT) spreads highly vulnerable gizmos all across the globe, setting us up for a tragedy that will make WannaCry look like a minor hiccup.  And nobody wants that to happen.

Sources:  Zeynep Tufekci's op-ed piece "The World Is Getting Hacked.  Why Don't We Do More to Stop It?" appeared on the website of the New York Times on May 13, 2017, at https://www.nytimes.com/2017/05/13/opinion/the-world-is-getting-hacked-why-dont-we-do-more-to-stop-it.html.  I also referred to the Wikipedia article "WannaCry ransomware attack."  My blog "Ransomware Comes to the Heartland" appeared on Mar. 27, 2017.

Monday, May 15, 2017

India's Energy Future and Climate Change


In an article that appeared in May's Scientific American, Council on Foreign Relations Fellow Varun Sivaram shows that India's path of energy development could have a large impact on future greenhouse-gas emissions.  Unlike China, which currently pumps out about twice as much carbon into the air as the U. S., India has yet to build most of its infrastructure.  And in that fact lies both a challenge and an opportunity.

It will help to get things in proportion if we compare greenhouse emissions and populations for China, the U. S., and India.  According to the U. S. Environmental Protection Agency, in 2014 the world leader in carbon dioxide emissions was China, contributing about 30% of the global total.  Next in line was the U. S., with 15%, and third was India, with 7%.  The much-ballyhooed Paris accords of 2015 committed India to an apparently almost meaningless limit; as Sivaram says, "its overall commitment to curb emissions was underwhelming.  If the government just sat on its hands, emissions would rise rapidly yet stay within the sky-high limits the country set for itself in Paris."
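
To get a feel for the proportions, here is a quick back-of-the-envelope calculation in Python.  The emissions shares are the EPA figures just quoted; the population figures are my own rough 2014 estimates, not numbers from Sivaram's article.

```python
# Crude per-capita comparison: each country's share of world emissions divided
# by its share of world population (world population taken as ~7.3 billion in
# 2014; national populations are approximate).
countries = {
    #           emissions share, population share
    "China": (0.30, 1.37 / 7.3),
    "U.S.":  (0.15, 0.32 / 7.3),
    "India": (0.07, 1.30 / 7.3),
}

for name, (emit_share, pop_share) in countries.items():
    index = emit_share / pop_share
    print(f"{name}: about {index:.1f}x the world-average emissions per person")
```

On that crude index, the average American is responsible for several times the world-average emissions and the average Indian for well under half of it, which is exactly why what India does next matters so much.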

By many measures, most citizens of India are still living in the same energy environment their ancestors occupied:  using dried cow dung, straw, charcoal, and firewood for domestic heating and cooking.  The lucky third or so who have access to more advanced fuel sources use either coal or oil.  The nation's electric grid is somewhat of a joke by Western standards, reaching less than a fourth of the population.  And those who get electricity can't count on it:  outages (both planned and accidental) are common, and government-inspired policies to keep rates low have resulted in chronic underinvestment that has further contributed to the grid's rickety status.

Unlike China, India has something approaching a democratic government, although with a heavy dose of socialist-style traditions left over from the Nehru years of the 1950s and 60s.  While the economy has improved greatly under governments since the 1990s that have favored private enterprise and privatized formerly state-owned enterprises, Sivaram points out that investment money is hard to come by.

To examine one extreme of how things could go from here, suppose that India follows the easier path already trod by China, exploiting readily accessible fossil fuels and building coal-fired power plants to supply its growing population, which is approaching 1.4 billion and due to outstrip China's in a few years.  If that happens, the U. S. will no longer be the world's No. 2 carbon-dioxide emitter—India will be, and might even surpass China to become No. 1. 

Of course, this is a competition that no government wants to win.  But zoom down to the micro view of individual citizens, and drastic restrictions on future fossil-fuel use in the name of curbing global warming become more problematic.  Most Indian citizens do not drive cars, and the vast majority of motorized vehicles sold even today are motorbikes or three-wheel jitneys.  Mobility is something everyone wants, and as more Indians get better jobs and are able to save money to buy larger items, the market for automobiles could grow tremendously.  But that development would only increase carbon-dioxide emissions.  The same people who want to drive would like to have plentiful, reliable electricity both for domestic uses and for things like agriculture and manufacturing.  But if that power is generated with coal or oil, there goes more CO2.

In his article, Sivaram holds out an alternative energy future that could become reality, given enough willingness on the part of national and state governments and citizens generally.  Solar energy is abundant in the countryside, and the government is already deploying solar panels to power irrigation pumps, but on a small scale.  Given enough investment, the desperately needed expansion of the electric grid could include the latest smart-grid technologies that would let it take advantage of wind and solar power, which otherwise would not fit easily into an old-fashioned grid designed for 24-hour-a-day power sources.  And the nice thing is that little retrofitting will be required, because most of the needed grid does not yet exist.

While coal and oil will be a large part of India's energy mix in the near future, another hope Sivaram has is that conservation measures will limit the increase in demand to less than it would be otherwise.  Rapid deployment of electric vehicles powered by renewable energy sources could help here, as well as an emphasis on energy-efficient appliances and buildings. 

The fly in this sweet-smelling ointment of the future, Sivaram admits, is the crying need for investment money.  And here is where things get murky.  In common with many other countries in Asia, India's regulatory environment is marred by complexity, delays, and corruption.  Even major infrastructure projects such as hydroelectric dams and grid improvements have been torpedoed by high interest rates, permit delays, and poor fiscal planning, resulting in abandoned projects and even bankruptcies.  These are not engineering problems.  These are social and government-policy problems, and it will take political courage and intelligence to make much progress in these areas.

With India halfway around the world, it's easy to ignore internal problems like these.  But in the academic semester just ending, I taught a graduate class for the first time in many years, and most of the students in it were from the Indian subcontinent.  Thirty years ago, most of them would have been from China, but there are now plenty of Chinese universities as good as or better than your average state school in the U. S., and so the new-graduate-student pool for middle-ranked U. S. universities has shifted south over the years.

If these students are like most foreign grad students, many of them will try to stay in the U. S.  But some will return to their native lands.  I hope that what they learn here about the social and political structure of the U. S. will help them realize that in many ways, India has a chance to avoid mistakes others have made before.  Whatever your views on global warming, I think we can agree that it is a hard problem:  how to allow millions of people in India to enjoy some of the benefits of advanced technology that we in the U. S. have enjoyed for three generations, while avoiding preventable harm to the planet we all live on.  I hope the citizens of India can take advantage of their opportunities to work out this problem in the best way possible.

Sources:  The article "The Global Warming Wild Card" by Varun Sivaram appeared on pp. 48-53 of the May 2017 issue of Scientific American.  The EPA website from which I obtained 2014 data on carbon-dioxide emissions is at https://www.epa.gov/ghgemissions/global-greenhouse-gas-emissions-data.  I also referred to the Wikipedia articles on the demographics of China and India and the history of the Indian republic. 

Monday, May 08, 2017

The False Promise of Digital Storage for Posterity


Now that almost every book, photograph, artwork, article, news item, story, drama, or film is published digitally, we are supposed to rejoice that the old-fashioned, imperfect, and corruptible analog forms of these media—paper that ages, film that deteriorates—have been superseded by the ubiquitous bit, which preserves data flawlessly—that is, until it doesn't.  A recent article in the engineering magazine IEEE Spectrum highlights the problems that Hollywood is having in simply keeping around usable digital copies of its old films.  And "old" in this sense can mean only three or four years ago. 

It's not like there isn't a standard way of preserving digital copies of motion pictures.  About twenty years ago, a consortium of companies got together and agreed on an open standard for magnetic-tape versions of movies and other large-volume digital material called "linear tape-open" or LTO.  If you've never heard of it, welcome to the club.  An LTO-7 cartridge is a plastic box about four inches (10 cm) on a side and a little less than an inch thick.  Inside is a reel of half-inch-wide (12 mm) tape about three thousand feet (960 m) long, and it can hold up to 6 terabytes (6 × 10¹² bytes) of uncompressed data.  Costing a little more than a hundred bucks, each cartridge is guaranteed to last at least 30 years—physically.

The trouble is, the same companies that came up with the LTO standard are part of the universal high-tech digital conspiracy to reinvent the world every two years.  Keeping something the same out of respect for the simple idea that permanence is a virtue is an entirely foreign concept to them.  Accordingly, over the last twenty years there have been seven generations of LTO tapes, and each new generation has been backward-compatible with only the one or two generations before it. 

What this means to movie production companies that simply want to preserve their works digitally is this:  every three or four years at the outside, they have to copy everything they've got onto the new generation of LTO tapes.  And these tapes don't run very fast—it's not like burning a new flash drive.  Transferring an entire archive can take months and cost millions of dollars, and the customers are at the mercy of an LTO standard that keeps changing. 
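
A little arithmetic shows why.  Assuming LTO-7's published native transfer rate of roughly 300 megabytes per second (a spec-sheet figure, not one from the Spectrum article) and a hypothetical two-petabyte studio archive, the numbers add up quickly:

```python
# Back-of-the-envelope: how long does it take to rewrite a film archive onto
# new LTO cartridges?  The archive size is a made-up example; the cartridge
# capacity and drive speed are approximate LTO-7 specification figures.
CARTRIDGE_BYTES = 6e12          # ~6 TB native per LTO-7 cartridge
DRIVE_BYTES_PER_SEC = 300e6     # ~300 MB/s native transfer rate

archive_bytes = 2e15            # a hypothetical 2-petabyte studio archive

cartridges = archive_bytes / CARTRIDGE_BYTES
hours_per_cartridge = CARTRIDGE_BYTES / DRIVE_BYTES_PER_SEC / 3600
total_drive_hours = archive_bytes / DRIVE_BYTES_PER_SEC / 3600

print(f"{cartridges:.0f} cartridges, about {hours_per_cartridge:.1f} hours each")
print(f"{total_drive_hours:.0f} drive-hours total, or about "
      f"{total_drive_hours / 24:.0f} days of round-the-clock copying per drive")
```

That works out to hundreds of cartridges and more than two months of continuous copying per tape drive, before you add verification passes, tape handling, and cataloging, so the months-long, multimillion-dollar figures quoted above stop looking surprising.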

According to the Spectrum article, Warner Brothers Studios has turned over the job of preserving its films to specialist film archivists at the University of Southern California, which already had a well-funded operation to preserve video interviews with Holocaust survivors.  But USC faces the same digital-obsolescence issues that the studios are dealing with, and one USC archivist calls LTO tapes "archive heroin"—it's a thrill compared to the old analog archive methods, but it gets to be an expensive habit after a while.

And that gets us to a more fundamental question:  given limited resources, what should each generation preserve, in terms of intellectual output, for the next one?  And how should preservation happen?

For most of recorded history, preservation of old documents was left mostly to chance.  Now and then a forward-looking monarch would establish a library, such as the famous one in Alexandria, founded by Ptolemy I Soter, one of the successors of Alexander the Great, about 300 B. C.   It held anywhere from 40,000 to 400,000 scrolls, and lasted until the Romans conquered Egypt around 30 B. C., when it suffered the first of a series of fires that destroyed most of its contents. 

One can argue that the entire course of Western history would be different if all the works of the Greek philosopher Aristotle (384 B. C. - 322 B. C.) had been lost.  The way we came to possess what works we have of his is hair-raising.  After Aristotle died, Theophrastus, his successor at the Lyceum, the school where Aristotle taught, inherited from him a large set of what we would today call lecture notes.   After Theophrastus died, he left them to Neleus of Scepsis, who took them from Athens, where the Lyceum was, back home to Scepsis, and stuck them in his cellar.  Then he died.  Evidently Greek families held on to real estate back then, and it's a good thing too, because it wasn't till about 100 B. C., more than two centuries after Aristotle's passing, that Neleus's descendants had a garage sale or something, and a fellow named Apellicon of Teos found the manuscripts and bought them.  He took them back to Athens, where his library was confiscated by the conquering Romans in 86 B. C.  Finally, some Roman philosophers realized what they had in Aristotle's works and started making copies of them around 60 B. C.

I won't even go into how most of Aristotle's works were lost again to everyone except Arabic scholars up to about 1200 A. D., but we've had enough ancient history for one blog.  The point is that historic preservation was left largely to chance until people began to realize the value of the past to the present in an organized way. 

While the movie industry deserves credit for laying out lots of money to preserve chunks of our visual cultural history, one must admit that its interests are mostly financial.  Once the people who saw a movie when they were in their twenties die out, the only folks interested in such films are the occasional oddball historian or fans of specialty outlets such as the Turner Classic Movies channel. 

The real problem with digital archives is not so much the fact that the technology advances so fast, although that could be alleviated.  It's the question that often has no answer until it's too late:  what is worth preserving? 

If you're a well-heeled library like the one at Harvard University, the answer is simple:  everything you can get your hands on.  But most places are not that well off, so it's a judgment call as to what to toss and what to keep, using the always-limited resources at hand.

Despite the best intentions of well-funded film archivists, my suspicion is that a few centuries hence, we will find that many of the works of most importance to the future, whatever they are, were preserved not on purpose, but by hair-raising combinations of fortunate accidents like the ones that brought us the works of Aristotle.  And if I'm wrong, well, chances are this blog won't be one of those things that are preserved.  So nobody will know.

Sources:  The article "The Lost Picture Show:  Hollywood Archivists Can't Outpace Obsolescence" by Marty Perlmutter appeared in the May 2017 issue of IEEE Spectrum and online at http://spectrum.ieee.org/computing/it/the-lost-picture-show-hollywood-archivists-cant-outpace-obsolescence?.  The story of how Aristotle's works came down to us is reported independently by at least two ancient sources, and so is probably pretty close to the truth, according to the Wikipedia article on Aristotle.  I also referred to Wikipedia articles on the Library of Alexandria and the Ptolemaic dynasty. 

Monday, May 01, 2017

New Cars Ain't What They Used to Be


A friend of ours whose age is somewhere north of seventy recently bought a new pickup truck.  Soon afterwards, in text messages she started calling herself "Keyfob."  When we asked why, she said, "Well, that's what my truck calls me.  When I get out of it it says, 'Keyfob has left the vehicle.'"

She has a new truck because she totaled her previous truck in a collision that she survived largely because of safety features that newer models have.  So no one should think I'm opposed to innovative technology in the automotive industry in general, especially when it contributes to safety.  But as Chicago Tribune reporter Robert Duffer recently pointed out, some of the innovations that carmakers have inflicted on new-car buyers recently can be annoying, confusing, or downright dangerous.

Duffer cites a J. D. Power survey of new-car-owner complaints that showed the category broadly described as "infotainment" was responsible for more complaints than anything else.  This includes things like touch screens, voice-activated commands, and touch-sensitive controls for radios and music players.  It turns out that by 2018, safety rules will mandate that every new car have a backup camera, and consequently a display screen will have to be somewhere in the driver's view.  Carmakers eager to get a competitive advantage are not going to leave such an opportunity alone, and you can expect they will pile more and more features into that screen in addition to simply displaying the backup camera output. 

Some of the problems with new cars stem from the fact that they are almost completely "fly-by-wire," in the sense that many of the controls the driver operates—accelerator, gearshift, and so on—don't do anything mechanical directly, but instead feed electronic sensors whose readings the car's computer turns into commands; much the same is true of the instrumentation that reports back to the driver.  Airline pilots, with their sophisticated and recurring training, managed the transition from mechanical airplane controls to fly-by-wire technology pretty well, but there were some glitches along the way even in that highly specialized realm.
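
To make the idea concrete, here is a purely illustrative sketch in Python; the names, numbers, and pedal-to-throttle mapping are hypothetical and not any manufacturer's actual logic.

```python
# Illustrative drive-by-wire sketch: the pedal is only a sensor, and a computer,
# not a cable, decides what the throttle actually does.

def read_pedal() -> float:
    """Pedal position from 0.0 (released) to 1.0 (floored)."""
    return 0.42                      # stand-in for an analog-to-digital reading

def set_throttle(opening: float) -> None:
    print(f"throttle actuator commanded to {opening:.0%}")

def control_step(speed_kph: float) -> None:
    pedal = read_pedal()
    opening = pedal ** 1.5           # the response curve is just software...
    if speed_kph > 180.0:            # ...as is this hypothetical speed limiter
        opening = min(opening, 0.2)
    set_throttle(opening)

control_step(speed_kph=65.0)
```

The point is not that this is bad engineering; it's that behavior a driver once could infer from mechanical linkages now depends on code the driver never sees and that can change with a firmware update.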

Duffer provides evidence that when you take the average driver, whose total training may consist of a few sketchy lessons under the reluctant tutelage of a parent decades ago, and plop him or her into a cockpit with literally dozens of new control surfaces, menus, options, and ways of doing things that automakers had done basically the same way for decades but are now completely different, you're going to have problems.

Perhaps the most striking issue is the way some manufacturers have misused the freedom they now have to make the gearshift lever work absolutely any way they want.  Back in the days of the column-mounted automatic gearshift lever, Duffer reminds us, the sequence "PRNDL" for "park-reverse-neutral-drive-low" was pretty standard.  Anybody back then could get into any car and at least know how to shift it.  But BMW and Fiat-Chrysler both went on the market in the last few years with gearshifts that defaulted to neutral, so the driver could turn off the engine and get out of the car with the vehicle still in neutral. 

For drivers who had developed the bad but understandable habit of relying on the transmission's park position to lock the driveshaft and keep the car from rolling, rather than setting the parking brake, this new feature was an accident waiting to happen.  And it did happen to a number of people, the most famous of whom was the Star Trek actor Anton Yelchin, who was pinned against a brick column and crushed to death when his Jeep Grand Cherokee rolled into him.  Most of those cars have now been recalled to fix this issue, which never should have shown up in the first place.  

With freedom comes responsibility, and the new freedom that automakers enjoy to reinvent the driving experience comes with a responsibility to make sure that the average driver is not inconvenienced or worse by innovations that look attractive at first, but turn out to be annoying or dangerous. 

A lesson can be drawn from the early days of automobiles prior to 1925 or so, when there were literally dozens of carmakers vying for what promised to be a huge and growing market.  Henry Ford's Model T, produced in some form from 1908 all the way to 1927, is not a machine that your average driver today could get going without some lessons.  Even after an electric starter was added in 1919, the operator had to manipulate two steering-column-mounted levers (one was the throttle and the other was the spark-timing advance) and manage three foot pedals, two of which dealt with a mysterious planetary transmission that was part manual and part automatic.  By the mid-1920s, however, the accelerator had moved to the floor and the brake and clutch pedal positions had stabilized in most newer makes, and there the matter stood until automatic transmissions came along. 

Then the question arose of where to put the automatic transmission controls.  They started out as a lever on the steering column, but even as early as the 1950s designers started experimenting.  The ill-fated Edsel, for example, had a series of pushbuttons in the steering-wheel hub to control the transmission, which probably led to problems like putting the car into reverse on the freeway when all you wanted to do was honk the horn.  Eventually, with the advent of front bucket seats, the between-the-seats gearshift lever showed up, but even that standard has been tinkered with to the endangerment of the public, as the story of the Star Trek star showed.

Maybe it's too much to hope for, but a movement among automakers to standardize on a few basic controls that all new cars put in the same place and that work in the same way would be welcome, at least to drivers who are no longer young enough to learn a completely different operating system each time they buy a new car.  At the very least, the car companies should view all software and hardware innovations with a mind to safety first, lest we have more potentially fatal problems such as the default-to-neutral gearshift. 

As for me, I'm going to hang on to my old vehicles till the wheels fall off, or maybe just before.

Sources:  Robert Duffer's article entitled "Five worst new car features reinvent the wheel for no reason" appeared on the Chicago Tribune website on Apr. 17, 2017 at http://www.chicagotribune.com/classified/automotive/sc-worst-new-car-features-autocover-0413-story.html.  I also referred to the Wikipedia article on the Model T.

Monday, April 24, 2017

Earth Deserves Better Than TV Coverage of Climate Change


As I write this, a day after Earth Day 2017, the memory of hundreds of "Marches for Science" and in particular, a CNN report on climate change makes me wonder whether the medium of television is more harmful than helpful in bringing the attention of the general public to complex issues of public interest.  These thoughts are stimulated by an online article and video clip of the report, which featured an exchange between famed popularizer of science Bill Nye the Science Guy, and a man I have seen in person and exchanged emails with, one William Happer, a longtime Princeton physicist who thinks concerns about climate change are, to put it mildly, overblown.

An otherwise uninformed observer of the exchange saw two older men, Nye wearing a bright-red bow tie and Happer dressed in muted grays, in two panels of a four-screen split that included CNN anchors and a representative of an environmental group.  Nye was clearly upset at Happer's mild-toned assertions that carbon dioxide is something each of us produces two pounds of a day just by breathing, and that to treat it as a pollutant is going too far.  What really got Nye going was when Happer compared the Paris climate accords recently signed by the Obama administration to Neville Chamberlain's appeasement of Hitler's Germany prior to World War II.  This one stunned even the anchors, who asked Happer to repeat himself, and he explained that the parallel was that neither agreement was going to achieve its stated aim.  Chamberlain failed to stop Germany from grabbing more territory in moves that led directly to World War II, and according to Happer, the Paris accords won't do anything significant to slow down climate change.

What media experts call the "visuals" were all in favor of Nye, a practiced TV performer who brought the right amount of passion to be convincing without yelling or seeming to lose his cool.  But if you look at the academic qualifications of the two parties, you might begin to change your mind.  Mr. Nye's highest formal degree is a B. S. in mechanical engineering, after which he started doing amateur comedy routines and developed the on-air personality for which he is now famous.  William Happer holds a Ph. D. in atomic physics from Princeton and is the Cyrus Fogg Brackett Professor of Physics at that institution.

As encouraging as the Paris agreement was to many who believe that the only moral thing to do with regard to climate change is to stop burning fossil fuels yesterday and undertake a massive retooling to renewable energy, hardly any of its terms are binding on the parties involved.  Like many other such agreements, it consists of hopeful statements of intentions, but if history is any guide, the only countries that will fulfill their obligations under the agreement are ones that were headed in that direction anyway. 

As University of Oxford professor of energy policy Dieter Helm points out in his book The Carbon Crunch, looking to international agreements as an effective means of lowering carbon emissions is probably a fool's errand.  Many European countries are currently outsourcing carbon-intensive industries such as steelmaking and heavy manufacturing to places like India and China, and so Europe can show a net reduction in carbon footprints that is happening not only because of high-minded dedication to the environment, but because of changes in the makeup of their economies toward services and high-tech businesses that simply don't need as much energy. 

As for China and India, the future growth of their economies depends vitally on fossil fuels for the foreseeable future.  They are not about to put the economic brakes on developments that have led millions of their people out of rural subsistence-farming poverty to improved lives in manufacturing-intensive towns and cities.  The Paris agreement may look good on paper, but according to Helm, the chances of any significant dent being made in the world's carbon production by such an agreement roughly equal a snowball's chances in Hades (my metaphor, not his).

Since Helm has made his professional career out of taking global warming seriously, and  spends the rest of the book describing real-world near-term solutions to the problem of fossil-fuel emissions, I think we can count him as a credible witness.  And his conclusion is, leaving Hitler aside, that Happer's opinion on the effects of the Paris agreement is probably closer to the mark than Nye's.

When I sat down to write this blog, I was all set to denounce the politicization of science, and then I thought of another book I read recently:  The Pope of Physics, a biography of the famed Italian physicist Enrico Fermi.  Fermi was a scientist's scientist, in that he lived, breathed, and slept science, taking little or no interest in politics and dealing with it only when it directly affected his livelihood (as when he and his partly-Jewish wife decided to flee Fascist Italy as it turned toward Hitler's Germany in its anti-Semitism), or when politics made it necessary to pursue a particular line of inquiry so that the Germans wouldn't make a nuclear weapon before the Allies did and take over the world.  For that reason, Fermi willingly led a team funded by the U. S. government to build the world's first nuclear reactor in 1942, which was a necessary step in the development of nuclear weapons.  But once the war was over, he was glad to get back to basic physics, for the most part.

The fact is, science has always been political to some degree, going all the way back to Francis Bacon, who took what passed for science in the 1500s and put it to work for the betterment of mankind.  Some scientists who worked on the nuclear bomb opposed its use in war, and some scientists today, such as Happer, criticize the plans for gigantic economic disruptions that would take place if the Bill Nyes of the world became dictators of our industrial and economic policies.  At least today, the debates are carried out in the open on widely accessible media.  It's hard to believe, but the entire nuclear-weapon development program in World War II was carried out in near-total secrecy, in a fashion that would get witheringly criticized in view of today's standards of open debate about major publicly-funded projects.  And the outcome, namely nuclear weaponry, has posed a moral quandary ever since. 

But the Nye-Happer confrontation is a reminder that visuals can be deceptive, and there is always more to be learned about a technical subject than you see on TV.

Sources:  The CNN report and video of the Nye-Happer exchange can be viewed at http://www.slate.com/blogs/the_slatest/2017/04/22/watch_bill_nye_blast_cnn_on_air_for_pitting_him_against_climate_change_skeptic.html.  I also referred to Wikipedia articles on Bill Nye, William Happer, and Enrico Fermi.  Dieter Helm's The Carbon Crunch:  How We're Getting Climate Change Wrong—and How To Fix It was published in 2012 by Yale University Press.  The Pope of Physics by Gino Segré and Bettina Hoerlin was published in 2016 by Henry Holt & Co.  I blogged on my encounter with William Happer and the dissing of his talk by a gathering of otherwise well-behaved scientists on Oct. 7, 2013 in "When Scientists Aren't Scientists."

Monday, April 17, 2017

A Modest Proposal to Save California $200 Million


In order to forestall a lot of hate mail, the following blog is written in the tradition of eighteenth-century satirist Jonathan Swift's essay, "A Modest Proposal."  It is not meant to be taken seriously. 

With that out of the way, I have a modest proposal to save the good citizens of California over $200 million.  That is the estimated cost of a suicide-deterrent net project that is going to be installed on the Golden Gate Bridge, according to a recent article in the San Francisco Examiner. 

Opened in 1937, the Golden Gate Bridge is as iconic a symbol of San Francisco as the Eiffel Tower is of Paris.  But shortly after it opened, its builders found that they had constructed what lawyers call an attractive hazard, like an unfenced swimming pool in a neighborhood full of small children.  The 240-foot plunge from the sidewalk of the bridge to the deep waters below began to draw despondent individuals of all kinds, who generally did not survive their high dives.  According to the Examiner, 1,558 people have committed suicide by diving from the bridge, an average of about one person every two or three weeks.  And this is despite intensive efforts of patrolmen specially trained to spot depressed-looking loners who evince an unhealthy-looking interest in the view from the sidewalk. 

So at long last, after numerous engineering proposals and at least one squabble over who should have won the bid, engineers plan to install a wire-rope mesh on the bridge, about 20 feet below the level of the sidewalk and extending 20 feet out on both sides.  The rendering posted online makes it look fairly unobtrusive, but it will inevitably change the appearance of the bridge, sort of like putting fishnet hose on the legs of a beautiful woman.  (Sometimes it helps, but generally it just looks tacky.)  So if you've never seen the bridge in its present netless state, you'd better go look fast, because soon they will put up temporary fences along the sidewalks to keep people from throwing things at the construction workers below.  This last measure is a consequence of sad experience—pedestrians evidently not only can't be trusted to stay on the sidewalk, they can't keep their potential missiles to themselves either.

And now for the modest proposal.  Last June, an assisted-suicide law took effect in California.  It is now legal in that state to plan and execute your own death and funeral, and people have already started taking advantage of this law.  We now see the interesting spectacle of Californians on the one hand spending $200 million to stop a few dozen people a year from doing themselves in, and on the other hand encouraging people who really want to do themselves in to go ahead and do it. 

For $200 million, a lot of people contemplating suicide in California could have an all-expense paid trip from wherever they live to San Francisco.  Those with debilitating diseases could take ambulance rides, and even they might manage to live it up overnight in the garden of nightlife delights for which San Francisco is famous.  Then, with all good-byes said, the person could be assisted out onto the sidewalk and take the time-honored way out that more than 1500 of their fellow citizens have chosen over the years.  And of course, we wouldn't want any ugly fish-net suicide deterrent to get in the way, so there's where you'd save $200 million.

I don't expect anybody to jump at this idea (so to speak), except to say how tacky I am to conflate the people who jump off the Golden Gate Bridge with the people who take advantage of the new assisted-suicide law.  But the point I'm trying to make with my proposal is this:  is suicide okay, or is it wrong?  Or does the answer depend entirely on the convenience of the professionals involved?

It's beginning to look like the latter is the best available answer, at least where California is concerned. 

Take doctors who want to put some of their suffering patients out of their misery, but are worried that someone will find out and charge them with murder.  Solution:  pass an assisted-suicide law that makes it legal.  Now the docs don't have to worry about murder charges. 

Take first responders who, after someone takes their last high dive from the bridge, have the disagreeable task of conducting a search, possibly in bad weather, and fishing out said diver from the drink.  It's expensive, dangerous, and bad publicity besides.  So once the nets are in place, people who are determined to jump will either find another bridge, or if they're really determined, they'll jump down onto the net and, using the exercise-honed athletic skills that many Californians take pride in, they will crawl to the edge of the net and finish the job.  You heard it here first.  Notice the designers don't call it a suicide-prevention device, just a suicide deterrent.  So even $200 million isn't going to reduce the number of suicides from the bridge to zero, and the authorities implicitly admit that.

All satire aside, I think doing something more to keep people from jumping off bridges to their deaths is a good idea.  And maybe the giant stainless-steel nets on either side of the Golden Gate Bridge are the best way to do it, although the price tag gives me pause.  The expenditure of so much money on suicide prevention, on the one hand, and the passage of a law saying that basically it's okay to off yourself, on the other hand, reveal a deep split or inconsistency in attitudes toward suicide in our most populous state. 

This nation's founders allowed for differences in belief on the part of its citizens.  But for most of the history of the United States, there was a general consensus, based upon mostly religious tenets, that suicide, assisted or not, had no redeeming social value and was to be discouraged in law and in engineering (and in medicine, too).  As evinced by the assisted-suicide law in California, this consensus has broken down, at least in that part of the country.  And that's a sad thing, for both those who stand on the sidewalk of a bridge thinking about jumping, and those who lie in a nursing home thinking about hastening their own end. 

Sources:  The San Francisco Examiner website carried the article "Construction to begin on Golden Gate Bridge suicide deterrent system," on Apr. 13, 2017 at http://www.sfexaminer.com/construction-begin-golden-gate-bridge-suicide-deterrent-system/.  A rendering of what the system may look like once installed can be viewed at
http://www.ggbsuicidebarrier.org/images/suicide-deterrent-rendering-looking-north.jpg.

Monday, April 10, 2017

What Really Happened With Internet Privacy?


Anyone paying attention to U. S. headlines recently heard something about internet privacy.  But what you heard probably depends on where you heard it.  President Trump signed a bill on Monday, Apr. 3 that used a thing called the Congressional Review Act to reverse a pending FCC rule.  So whatever it was, the rule that was revoked hadn't even gone into effect yet.

If it hadn't been shot down, the FCC's proposed rule would have required internet service providers (ISPs) such as AT&T to request permission from their customers to use certain data about what the customers do online.  Right now, ISPs don't have to ask, but depending on the ISP, they may not be doing much with that data anyway.  The big users of customer-generated data are social-media outlets such as Facebook, Internet companies such as Google, and advertisers who pay these outfits to place targeted ads using harvested customer data.  I'm sure the ISPs would like to get into that business eventually, but the FCC rule would have blocked them.  President Trump and the Republican-dominated Congress simply removed that stumbling block.

So for one thing, nobody lost any internet privacy they previously had.  As to the hypothetical future, it's anybody's guess what the FCC rule might have done, but clearly the ISPs were not happy about it, which was how the rule got quashed by a corporate-friendly Congress and President.

How you feel about this may depend on what you think about internet privacy and corporate freedom.  At this point in history, the phrase "Internet privacy" is about as meaningful as "Trump modesty."  Both are in short supply.  Most people who spend any time at all on the web have turned from looking for electric toothbrushes online, say, to researching the versions of ancient Mayan calendars, only to have an ad for toothbrushes pop up in the middle of the British Museum's webpage.  Obviously, a combination of "cookies" (small bits of data that websites store in your browser so that their servers can recognize it later and tell where it has been) and clever marketing schemes has engineered that outcome.  All the FCC rule might have done would have been to stop ISPs such as AT&T and Verizon from doing similar things, at least without asking first.   And the asking could have been buried in one of those novel-length terms-and-conditions documents that everybody must either lie about reading before signing onto a new service, or actually read (and I don't know anybody who reads them).  The only reason the FCC could have passed the rule in the first place lies in the historical carve-outs that determine which Federal agency gets to regulate which means of electronic communication.  A similar historical fluke explains why on-the-air TV shows are not quite as raunchy as cable shows:  the FCC gets to regulate on-air stuff, but not cable-only stuff.
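
For those who want a little more detail than that parenthetical, here is a minimal sketch, using Python's standard http.cookies module and made-up domain names, of how a third-party tracking cookie links your toothbrush shopping to the British Museum's page.

```python
import http.cookies

# First visit: a toothbrush shop's page embeds an ad from ads.example, and the
# ad server's response includes a header like
#   Set-Cookie: uid=abc123; Domain=ads.example; Path=/
jar = http.cookies.SimpleCookie()
jar.load("uid=abc123; Domain=ads.example; Path=/")

# Later, the British Museum's page also embeds content from ads.example, so the
# browser automatically attaches the stored identifier to that request, letting
# the ad network link the two visits to the same browser.
print(jar.output(attrs=[], header="Cookie:"))   # -> Cookie: uid=abc123
```

The FCC rule at issue here wouldn't have touched that mechanism at all; it concerned only what the company carrying your traffic could do with its own records of where that traffic goes.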

So what has been portrayed in some circles as an epic loss of consumer protection turns out to be more of a turf battle among giant powerful Federal agencies and giant corporations, and the consumer just gets to watch the results from the sidelines. 

Even though the actual effect of either the FCC ruling or its revocation by Congress and the President might have been minimal, it's worth asking a broader question about how consumers—or citizens, to use a more general term—are faring with respect to the centers of power in the U. S.  I recently ran across a blog by a man who, back in May of 2016, before the party conventions had selected either Presidential candidate, predicted that Trump would not only be the Republican nominee, but that he'd win too.  Anybody can make a lucky guess, but this gentleman, a writer by the name of John C. Médaille, based his prediction on the fact that ordinary Americans were enraged that their interests had been ignored in favor of the interests of "the Rich, the powerful, the banker, the foreigner."  Of course, our current President belongs to at least two of those categories himself, and Médaille was far from pleased that Trump was probably going to win.  But he was right.

Powerful corporations such as Google and Facebook are able to offer "free" services that compel users to generate content that profits the companies.  Médaille, who believes in an obscure and mostly forgotten system of economics called distributism, sees this sort of thing as an injustice, which brings the matter into the scope of engineering ethics.  Because engineering, broadly speaking, makes everything on the Internet possible, engineers who work for such companies shouldn't simply turn a blind eye to the applications of their code, saying, "All they pay me to do is code.  What they do with the code isn't my business."  Google's code of conduct, summed up in the phrase "Don't be evil," is a masterful exercise in question-begging, if only because, at least to my knowledge, it doesn't include a definition of "evil." 

And by the nature of human relations, we can never set out a precisely-written code of conduct that a robot could follow flawlessly, because we're not robots.  We're human beings, each of us a mystical world unto ourselves, and relations among such beings cannot be reduced to mathematical formulas. 

The kerfuffle about the proposed FCC ruling shows that, although our current President ran as the vindicator of the common man and woman, reality may be setting in rather faster than anyone expected—reality being the continuation of a long-term trend of concentration of both economic and political power in the hands of an oligarchic few.  By the nature of modern engineering, most engineers will end up working for medium-size to large corporations, and therefore have a perhaps unconscious bias in favor of policies and actions that favor such corporations. 

However, there are reasons that millions of people in the U. S. have experienced stagnating wages, worsening work conditions, and a lack of genuine opportunities to be a free contributor to the common wealth.  Instead, unless you have reached a certain educational level, your options are nearly all of the "heads we win, tails you lose" variety, and many men in particular have taken the easy way out of simply giving up on work and living off the meager surpluses of welfare and compliant relatives and girlfriends that are available. 

To reverse such trends will take more than an internecine government flap.  It will take first, awareness of the depth and scope of the problem, and second, a willingness to overlook differences and artificial divisions set up by those hoping to keep the masses tranquil, and to do something in a united way that will bring about meaningful change.  But that is a topic for another time.

Sources:  I used material from The Hill's website posted on Apr. 3, 2017 at http://thehill.com/homenews/administration/327107-trump-signs-internet-privacy-repeal, entitled "Trump signs Internet privacy repeal."  That article referred to a blog by a person described as "AT&T's top lobbyist," Bob Quinn, at https://www.attpublicpolicy.com/privacy/reversing-obamas-fcc-regulations-a-path-to-consumer-friendly-privacy-protections/, which I also referred to.  John C. Médaille's prediction of Trump's triumph and his mixed feelings about it can be read at http://distributistreview.com/cassandra-calls-election/.  Another blog of mine on distributism can be found at http://engineeringethicsblog.blogspot.com/2008/09/what-is-distributism-and-why-should.html.