Monday, December 31, 2018

Design Flaw Identified in FIU Bridge Collapse


Back on Mar. 15 of this year, a new pedestrian bridge across a busy highway running through the Florida International University campus suddenly collapsed, killing six people and injuring eight more.  The bridge was fabricated as a single long concrete truss consisting of upper and lower decks connected by a series of diagonal and vertical struts.  Trusses are familiar elements of steel-bridge construction, but there are special design issues involved in making a truss out of concrete.  And according to an update issued by the U. S. National Transportation Safety Board (NTSB) on Nov. 15, it looks like someone may have made a fatal error in part of the design.

When we blogged on this accident back in March, it was already known that some cracks had shown up at the north end, where the northernmost vertical member and the adjacent diagonal strut went into the bottom deck.  At the time, the construction supervisors held a meeting about the cracks, but the NTSB has blocked publication of the meeting minutes until its final report on the accident is issued, which probably won't be until some time next year.  The Miami Herald reports that after the meeting, a construction worker was sent out to tighten tension rods inside the diagonal strut.  This worker appears to be the one who died when the bridge collapsed.

The modern civil engineer has abundant design resources at his or her disposal:  computer-aided modeling and stress calculations, three-dimensional visualization and planning tools, and other computational aids that take a lot of the former drudgework out of mechanical and civil engineering design.  Such aids have made possible many recent designs that would have been difficult or impossible to create using the old manual slide-rule and design-table approaches. 

But even with all the computer assistance in the world, the information about a given design has to be understood and checked by human beings.  That is why most public civil engineering projects must have their designs approved by a registered professional engineer (PE), whose stamp or signature appears on the drawings.  That stamp puts the reputation of the engineer on the line:  it is a guarantee that the design will do what it's intended to do. 

Long chains of reasoning and responsibility lie behind every decision to approve a set of drawings.  Those chains may pass from person to person, or from computer output to person.  Computer-aided calculations answer such questions as, "If this particular junction of a strut and a vertical member is under that kind of stress, will it be able to withstand the stress with a reasonable margin of safety?"  Given that the inputs to tried and tested software are correct, the software should give the correct answer, assuming that the person using the software knows how to use it and interpret the results correctly.  Furthermore, the chain of engineering integrity requires that when the PE responsible for the overall design, the person whose stamp of approval appears on the plans, asks underlings if this or that part of the design is good, the underlings must give an honest answer.  And the PE must trust that answer, or rather, the persons answering for the integrity of the plans.
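
To make the stress question concrete, here is a minimal sketch (in Python) of the kind of demand-versus-capacity check such software performs at a single truss joint.  The load, capacity, and resistance factor below are made-up numbers for illustration only and have nothing to do with the actual FIU design calculations:

```python
# Minimal sketch of a demand-vs-capacity check at one truss joint.
# All numbers are illustrative, not from the actual FIU bridge design.

def demand_capacity_ratio(factored_demand_kips, nominal_capacity_kips,
                          resistance_factor=0.75):
    """Ratio of factored load to usable capacity; must be <= 1.0 to pass."""
    usable_capacity = resistance_factor * nominal_capacity_kips
    return factored_demand_kips / usable_capacity

# Hypothetical interface force at the member-to-deck connection:
ratio = demand_capacity_ratio(factored_demand_kips=1500.0,
                              nominal_capacity_kips=1800.0)
print(f"Demand/capacity ratio: {ratio:.2f}",
      "-> OK" if ratio <= 1.0 else "-> inadequate; redesign required")
```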

In any human organization, there is always the possibility of error.  Sometimes errors can be traced to a particular person, and sometimes they can't.  The NTSB has made sure that all available sample materials from the wreckage of the FIU bridge were tested to see whether they met the minimum specified strength and other standards.  So far the materials have passed those tests, so it doesn't seem that the collapse can be blamed on defective materials. 

The death or injury of bystanders in a bridge collapse is a tragedy regardless of whether the accident could have been prevented or not.  But if a design flaw really is the reason for the collapse, it will be ironic that the design, which the Herald report terms "unorthodox," was, before its installation, a point of pride for FIU's civil engineering program, which specializes in the accelerated bridge construction methods used on this bridge. 

Back when universities were smaller and more personal institutions, engineering faculty members would sometimes contribute their professional expertise to campus projects, helping in the design of new buildings or consulting professionally with regard to campus technical issues.  The FIU civil engineering professors do not appear to have been personally involved in this particular design, however, other than to give their informal approval of the general approach and construction methods.  In fairness, many bridges have been successfully built using on-site accelerated bridge construction, which does not appear to be implicated in the collapse.  But in this case, it might have been a good idea to have qualified faculty members go over the plans, and they might have caught any errors that contributed to the collapse.

However, that is not the way most universities operate these days.  Each professor has his or her own irons in the research and teaching fires, and to ask one of them to stop what they're doing and check some plans for a new building or bridge would be regarded as an unfair imposition on their time, and rightly so.  They might reply that there are professionals being paid to do that, and they would be correct.

But when professionals are paid to do a job, it's up to them to do it right.  According to the latest update from the NTSB, someone (or possibly something, if we include computers) failed in that responsibility.  And physical objects are not forgiving.  The warning signs were there:  cracks in the location that subsequently failed.  We hope that the NTSB will use the embargoed meeting report to figure out what went wrong, not only in the original design, but also in the management process that led to the fatal decision to try tensioning the strut without stopping traffic underneath the bridge.  But until the final report on the accident is issued, this accident stands as a reminder to everyone who deals with technology that could kill or injure someone—a reminder that the lives of innocent people depend on how well you do your job.

Sources:  The NTSB update of Nov. 15, 2018 can be found at https://www.ntsb.gov/investigations/AccidentReports/Reports/HWY18MH009-investigative-update2.pdf.  I also referred to the Miami Herald report on the update carried at https://www.miamiherald.com/news/local/community/miami-dade/article221706575.html.  My original blog on this accident at http://engineeringethicsblog.blogspot.com/2018/03/the-fiu-bridge-collapse-more-questions.html had an incorrect date for the accident, which has now been corrected.

Monday, December 24, 2018

The Gatwick Drone Incident: Technology Outpaces Policy


Gatwick Airport is the UK's second busiest flight facility after Heathrow, and last Wednesday, Dec. 19, it was accommodating thousands of holiday travelers.  Around 9 PM, an unmanned aerial vehicle (UAV), commonly known as a drone, was sighted in the airspace dangerously near the airport's single runway.  Just this year, the UK prohibited drone flights within 1 km of airports, and this drone was well within that limit. 

No details are yet available about exactly what kind of drone it was.  But it was large enough (or its lights were bright enough) to be seen at night.  The airport authorities, acting with prudence, ordered a temporary shutdown in the hopes that the drone flight was an isolated mistake that could be dealt with quickly.  Unfortunately, that wasn't the case.  Shortly after flights were resumed, another drone was sighted.  Eventually, observers logged over 50 separate drone sightings, and the airport was shut down for a total of 33 hours before the last drone went away and flights were resumed.  As of Saturday, Paul Gait and Elaine Kirk, a couple living near the airport, were arrested in connection with the incidents, but as of Monday Dec. 24 they had been released without being charged.

Because Gatwick is a key hub in so many airline networks, the shutdown affected over a hundred thousand travelers and sent ripples through the world's air-transport system for days.  Eventually, the authorities mustered military equipment capable of both locating and shooting down drones, but by that time the threat had ceased.

This incident raises a number of questions:  what airports' policies on drone sightings should be, what regulations drone users and manufacturers should have to deal with, and how we are going to prevent copycat drone incidents like this in the future.  First, the policy question.

It looks like the UK is somewhat behind the U. S. in its regulation of drone technology.  For several years, the U. S. Federal Aviation Administration (FAA) has required registration of ownership of drones (at least those above a certain size and capability), and laws are already in place restricting drone flights above certain altitudes and near airports.  The U. S. has had incidents of drones near airports, but no long-term shutdowns of major airports comparable to Gatwick. 

It's possible that the UK authorities erred on the side of excessive caution in ordering a total shutdown of the airport.  Depending on the size of the drone, they might have opted merely to warn pilots that there was a drone in the vicinity, as there are birds whose weight and consequent hazards to aircraft are comparable to that of a small drone, and it is rare to see airports shut down because of excessive bird flights over the landing areas.  But birds don't carry explosives, and terrorist fears were probably prominent in the decision to play it safe and simply shut down the single runway rather than run the risk of having a plane damaged or destroyed by a bomb-carrying drone.

That being said, what could authorities have done to prevent the drone pilot (or pilots) from flying their UAVs in restricted airspace?  Presently, not much, short of trying to shoot them down.  There is electronic fence technology available, but attempts simply to jam the radio frequencies typically used by drones could have severe unintended consequences, even possibly disrupting electronics that are vital to legitimate air operations.  And if the drones were pre-programmed to follow a set flight pattern, they would not even have to be in constant communication with their operators to fly, so jamming might not have done any good.

Going aggressive and trying to shoot the thing down is not that easy.  A drone at a distance of a kilometer or so is a very small target.  If a bullet or rocket misses it, that bullet or rocket is going to come down somewhere, and typically metropolitan airports are not places where you want bullets or rockets coming down at random.  So that's not a realistic option either.

The best long-term solution might be to build in something called "remote ID" that the world's largest drone manufacturer, DJI, suggested in a statement.  Remote ID would be a system whereby all drones would transmit their location, the pilot's location, and an identification code in real time.  If such a system were made mandatory, authorities could simply read the code and run over to where the pilot is and arrest him or her.  It's interesting that the biggest drone maker suggests such a thing, but obviously hasn't included it in their products yet, possibly for cost and performance reasons.  Low-end drones don't have GPS receivers and wouldn't be capable of remote ID, but maybe those types are not the most serious threat to places like Gatwick anyway. 
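
To give a flavor of what such a system might involve, here is a toy sketch of the kind of message a drone could broadcast.  The field names and the JSON format are my own invention for illustration; any real remote-ID rule would specify its own fields and encoding:

```python
# Toy sketch of a "remote ID" broadcast. Field names and format are invented
# for illustration; real remote-ID rules define their own message formats.
import json, time

def make_remote_id_message(drone_serial, drone_lat, drone_lon, drone_alt_m,
                           pilot_lat, pilot_lon):
    return json.dumps({
        "drone_id": drone_serial,              # registered identification code
        "timestamp_utc": int(time.time()),     # when the report was generated
        "drone_position": {"lat": drone_lat, "lon": drone_lon, "alt_m": drone_alt_m},
        "pilot_position": {"lat": pilot_lat, "lon": pilot_lon},
    })

# Example broadcast (coordinates are rough and purely illustrative):
print(make_remote_id_message("UK-EXAMPLE-0001", 51.153, -0.182, 120.0,
                             51.160, -0.190))
```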

Even with such ID technology, a determined pilot could keep on the run and stay ahead of the cops long enough to cause serious disruption.  And chasing down more than one drone at a time could be hard.  Because drones can typically stay in the air for only half an hour before their batteries have to be recharged, the number of drone sightings during the Gatwick shutdown leads authorities to believe that several drones and operators were involved. 

The investigation continues, and it will be interesting to discover who did it and why.  In the meantime, the UK has had a rough wake-up call with regard to their policy on drones.  One hopes that they don't overreact with blanket bans on the devices, which are proving to be useful in a wide variety of commercial and amateur applications.  But we can't have major airports getting shut down at the whim of a few people with consumer-grade drones.  So the policy and regulatory environment, especially in the UK, will have to catch up with drone reality on the ground—or rather, in the air—to prevent such incidents in the future.

Monday, December 17, 2018

Has Human Gene Editor Been Edited Himself?


Dr. Jiankui He of the Southern University of Science and Technology in Shenzhen, China, claims to have used a gene-editing technology called CRISPR/Cas9 to edit the genes of twin girls in order to make the babies resistant to HIV, the AIDS virus carried by their father.  When news of his experiment leaked out, scientists and governments around the world attacked him for doing what is widely viewed as an unethical experiment.  After Dr. He tried to defend himself at a Human Genome Editing Summit in Hong Kong at the end of November, the president of Dr. He's university reportedly collected him and took him back to Shenzhen, and his whereabouts are presently unknown.  He is no longer answering his phone, his lab has been shut down, a company he founded has lost contact with him, and one report says he has been placed under house arrest. 

First, a little background.  It will be very little, because biology and bioengineering are not my forte, to say the least.  CRISPR is an acronym for "clustered regularly interspaced short palindromic repeats," DNA sequences found widely in bacteria, and these sequences are used with an enzyme called Cas9 in a technology called CRISPR/Cas9 to edit DNA.  So in the last fifteen years or so, we have gone from reading the human genome (the goal of the Human Genome Project, completed in 2003) to editing the genes of human beings—at least if Dr. He has done what he says he's done.

From a scientific point of view, his claims remain unsubstantiated, because he has not yet published anything about this particular experiment in a peer-reviewed journal.  He apparently intended to do so, but when news of the experiment leaked out, Dr. He decided to post information about it himself to forestall rumors.  What he posted did a lot more than that.

There are enough questionable ethical practices in this incident for several columns.  The most prominent question is whether Dr. He did wrong in deliberately manipulating the gene sequence of human embryos and then implanting them back in the mother to be born.  Nothing has been said about how many unsuccessful tries were made along these lines, but if this experiment was like others, the yield rate was probably very small. 

Besides that question, there is the problem of talking about controversial experiments prior to peer review.  We still don't have any verification as to whether Dr. He really did what he said, although he has a good track record in the field of previous genetics research in less controversial areas.  But given the nature of his situation, Dr. He probably did the least bad thing in releasing more information rather than just letting rumors run wild.

What is most interesting to me is the way the government of China has reacted to the firestorm of controversy.  Up to now, Dr. He has been treated like a golden boy, being allowed to study abroad at Rice and Stanford, receiving a coveted Thousand Talents Award to set up his own lab, and founding or being involved in six companies focused on commercializing aspects of his research.  Earlier this year he announced that he was taking a leave from his university position to concentrate on his commercial activities. 

But once news leaked of his alleged CRISPR/Cas9 experiment with the twins and criticisms began to mount, the weather changed fast.  China currently has no inconvenient encumbrances, such as the legal concept of due process, to delay rapid and decisive action on the part of its government.  So when someone high up in the power structure decided that Dr. He was no longer an asset, his fate was sealed.  It may be months or years before we find out exactly what has happened to him, but for now, his high-flying career appears to be at an end.  What the government gave, the government can take away, and apparently has.

There is an odd parallel here between what the Chinese government has done to Dr. He, and what Dr. He has reportedly done to the twins.  For years, he enjoyed the freedom to study at the best universities in the world, to follow his investigations into the secrets of the genome, and to speculate on commercial applications of his ideas.  But in a matter of weeks, it's been taken away, at least for the time being.

At least Dr. He had the opportunity to judge whether his experiment might land him in hot water.  He may have judged wrong, but he was free to refrain as well as to go ahead.  The twins—referred to in news reports only as Lulu and Nana—have had no choice whatsoever.  From the time they were born, they became participants for life in an experiment that was not of their choosing.  If what Dr. He claims to have done is true, they are the first human beings on Earth whose intrinsic genetic makeup came about not only through the volition of their parents, grandparents, and ancestors stretching back before the dawn of human history, but also through the deliberate mechanical technology of CRISPR/Cas9. 

Is this a tragedy?  A lot of people seem to think so.  Judging from the swiftness of the negative reactions heaped on Dr. He's head, most of them arose from what bioethicist Leon Kass calls the "yuck factor."  Some ideas and actions are just intuitively revolting to most people, and fiddling with a human embryo's genes falls into this category.  Given the magnitude of the opprobrium, the government of China saw a threat to its hoped-for reputation as a leader in rapidly advancing scientific fields such as biotechnology, and removed Dr. He from public (and maybe even private) view.  One researcher going a bit too far is disposable.  But China's long-term plans in this area are not known.

The more basic question raised by this research, and one that has not been addressed much so far in news reports on it, is whether human life is really distinct, set apart, or holy compared to other life.  If it is, then a whole array of things that are now legal and even praised in some circles, ranging from mix-and-match in-vitro fertilization to abortion, are highly questionable, to say the least.  If it isn't—if playing with human genes is no more harmful than what the Augustinian friar Gregor Mendel did to his pea plants to figure out the basics of genetics a century and a half ago—then I would ask, what's the big deal?  Once you've gotten over the shock of novelty, human gene editing will fade into the background and become just another way we mess with ourselves technologically.  I hope that never becomes the case, but unless we use this controversy to open up a wider inquiry into what the limits of biotechnology should be, I'm afraid we'll look back on Dr. He's case and wonder what all the fuss was about.

Sources:  The Australian Broadcasting Company posted a report about Dr. He's disappearance at https://www.abc.net.au/news/2018-12-07/chinese-scientist-who-edited-twins-genes-he-jiankui-missing/10588528.  I also referred to a report of theirs on the experiment itself at https://www.abc.net.au/news/2018-11-27/china-gene-edited-babies/10556676. 

Monday, December 10, 2018

Microchipping People: Convenience or Concern?


For some years now, we have had radio-frequency identification (RFID) technology available to make transponder chips small enough to be implanted into living beings such as dogs or people.  Almost no one objects to placing an identifying microchip in a pet, which in a legal sense is a piece of property like the sunglasses you might buy at a store.  But some lingering sense of the difference between humans and everything else gives us pause when we start talking about microchipping people. 

That sense hasn't stopped some four thousand Swedes from getting microchip implants, mostly from a startup called Biohax International.  It's interesting that Biohax's founder Jowan Österlund was at one point a professional body piercer, a profession which itself couldn't exist unless a segment of the population had already let down its guard somewhat concerning the idea of affixing pieces of metal to one's person. 

According to an NPR report, Swedes have high levels of trust for institutions such as their government, banks, railroad companies, and other organizations.  And microchipped Swedes are now able to use their implanted microchips instead of train tickets or credit cards for transportation, and can simply wave an implanted hand at a door-lock sensor instead of fumbling in a wallet for a pass card. 

A report in the Economist last summer mentioned something that often comes up in U. S. discussions of personal microchips:  a passage in the New Testament Book of Revelation about "the mark of the beast."  When the reporter asked Österlund about this concern, his reply was dismissive:  "people once thought the Beatles were the Antichrist."

Leaving such eschatological concerns aside for the moment, what are the other potential downsides of either voluntary or compulsory personal microchipping?   First, there is a privacy concern.  The memory capacity of such chips will only increase in the future.  Depending on what sorts of data are stored on the chip (medical information, for example), you could inadvertently allow strangers to access your most intimate medical secrets.  With a wallet card, you can always refuse to show it to somebody or even keep it in a shielded enclosure to prevent unauthorized readings.  But if an RFID chip is implanted in the web of skin between your right thumb and forefinger (a typical location), the only sure way to prevent unauthorized access seems to be wearing foil-lined gloves all the time. 

And there is another concern which is hard to express, but I'll try.  A person's identity cannot be realized in isolation.  That is to say, who we are is formed in the process of relating to other people.  I hold an appointment as a full professor at Texas State University.  But if somebody picked me up and dropped me off by myself on a desert island, my status as a full professor would become effectively void, because I would no longer be among the people who recognize me as such.  And so the ways by which we are recognized influence our own ideas about who we are.

We are already pretty far down the road I'm trying to describe, in that we are used to identifying ourselves by numbers, passwords demanded by all sorts of online systems, and by other impersonal means such as swipe cards and even biometric sensors.  In ways that are hard to quantify or even detect, but which I am convinced are nonetheless real, these impersonal or mechanical means of identifying ourselves do things to our self-concept—things that I am convinced are not that helpful.  But at least with passwords and biometric ID methods and wallet cards, these are all things that leave my bodily integrity alone. 

With a microchip, that bodily integrity is breached.  Now an actual physical part of myself, a foreign body, has become an essential part of my public identity.  And make no mistake, once people find out (and the technology allows) that one little implanted microchip can replace a fistful of wallet cards and a brain full of memorized passwords, they will become very popular, as many Swedes have already discovered.  And as night follows day, those chips will themselves become things of value—more valuable in some cases than the persons harboring them.  I am unaware that anyone has yet tried to extract another person's microchip under duress, but sooner or later, you can be sure it will happen, leaving the victim with a bloody hand and the thief with the victim's identity, at least until the victim can call a hotline and report that his microchip was stolen.  And Biohax had better start putting such a hotline system in place soon, if they haven't already.

I'll save my thoughts on the mark of the beast for last.  Christians who take the New Testament seriously, as God's word revealed to man, are nevertheless puzzled by the last book of the Bible.  Revelation is an example of a type of writing called apocalyptic literature (the Greek word for the book is "apocalypse") that was popular around the first and second centuries A. D.  It is highly symbolic, and unfortunately the keys to much of the symbolism have been lost.  So no one knows for sure who the two beasts of Rev. 13 are; that chapter tells us that the second of the two beasts will require everyone who wishes to buy or sell anything to receive a "mark" on their hand or forehead. 

This is bad news for them, because in the next chapter we hear from an angel who says, "If any one worships the beast and its image, and receives a mark on his forehead or on his hand, he also shall drink the wine of God's wrath," and it goes downhill from there, all the way to torment with fire and sulphur.   This explains the almost automatic and sometimes hysterical opposition from some Christian groups to any hint of a compulsory identification program that leaves marks or other things on one's body. 

I respect these concerns to the extent that I do not personally wish to have a microchip installed in my person.  But I don't necessarily agree with those who tell microchipped people that they're bound to be playing with fire.

Sources:  The National Public Radio report on Swedish microchipping appeared on the NPR website on Oct. 22, 2018 at https://www.npr.org/2018/10/22/658808705/thousands-of-swedes-are-inserting-microchips-under-their-skin.  I also referred to The Economist website, specifically an article carried on Aug. 2, 2018 at https://www.economist.com/europe/2018/08/02/why-swedes-are-inserting-microchips-into-their-bodies. 

Monday, December 03, 2018

Marriott's Data Breach: Not In Our Line of Work


Back when I attended Cornell for my master's degree, I learned that one of the stronger academic programs on campus was what is now called the Cornell School of Hotel Administration.  There was even an actual hotel on campus run by undergrads in the program, and reportedly (I never stayed there) it was one of the best hotels in Ithaca, and quite reasonably priced.  But this was back in the days when guests registered by signing a physical registration blank, which was filed in a file cabinet.  Advance registrations were made by phone or letter, although faxes were just beginning to be used in 1976. 
In order to steal a guest's registration information, a thief would have to break into the hotel office (which was staffed 24/7, meaning it would have to be robbery, not burglary) and carry off piles of paper.  And even if he did, the only records he'd get would be the ones from that particular hotel.

Fast forward to last Friday, Nov. 30, when Marriott, the largest hotel chain in the world, announced that their Starwood chain, purchased in 2016, had suffered one of the largest data breaches on record, beginning in 2014 and affecting possibly some 500 million customers worldwide.  Besides the usual name, address, phone number, and email info, this breach may also have compromised passport and credit card numbers, although the latter were encrypted.  Today's sophisticated cybercriminals have shown that decryption is not beyond their capabilities, however.  Details of the breach are still sketchy, as the news release from Marriott indicated only that an unauthorized party copied and encrypted information within their system and "took steps toward removing it," although whether it was actually stolen is not clear from the announcement.  Nevertheless, the possibility exists, and this knowledge is less than comforting to the millions of Starwood guests whose personal data may have been stolen.

It used to be that running a hotel, or even a hotel chain, didn't require you to be a world-class information technology expert.  But hotels eventually saw the advantages of centralizing their electronic records so that no matter where their guests travel, the same information is available and discounts and other favored-customer perks can be applied instantly all around the globe.  The same overwhelming network advantages that often transform a slight numerical superiority into a practical monopoly apply to hotels as well as to telecom companies, Internet providers, and other network-intensive businesses.  And such concentrations of data are attractive to sophisticated cybercriminals who aren't going to waste their time on independent mom-and-pop businesses when the same amount of hacking effort can be rewarded with the personal records of 500 million people.

Human systems and organizations respond to change more slowly than the Internet does, and I can't help but wonder whether part of the fault for the Marriott data breach lies with the management of the Starwood organization, who may have been very good hoteliers, but less than competent IT managers.  It's too early to draw any conclusions, of course, but an interesting comparison can be drawn between hotel-running and, say, banking. 

Banks were into computers and their predecessors, IBM punch-card business machines and weird giant-typewriter-looking things called posting machines, back when the fanciest information technology you were likely to find in a hotel was the accountant's adding machine.  As the advantages of computerized banking became clear for purposes of check clearing, banks led the way in developing machine-readable checks and methods of securely sending financial data from place to place.  The spread of automated teller machines (ATMs) in the 1980s taught banks how to put secure networks in places where there was no actual bank, just an ATM.  Having been used to thinking about the possibility of theft constantly as a part of their business, banks naturally built up the security functions of their digital operations along with the operations themselves.  Their systems are by no means perfect, but even when data is stolen, they have devised rapid and effective methods to detect data breaches and to put a stop to their effects.  For example, if someone steals your credit card number, the credit-card issuer uses sophisticated buying-pattern software to raise a flag and check with you within hours to see whether illegitimate charges were made. 
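
As a toy illustration of what a buying-pattern check can look like, consider the sketch below.  Real issuers use far more sophisticated statistical models; the rule and the thresholds here are invented purely for illustration:

```python
# Toy buying-pattern check. Real fraud-detection systems use statistical
# models trained on millions of transactions; this rule is illustrative only.

def looks_suspicious(charge_amount, merchant_country,
                     typical_max_charge, home_country):
    """Flag a charge far outside the cardholder's usual spending pattern."""
    unusually_large = charge_amount > 3 * typical_max_charge
    unusual_location = merchant_country != home_country
    return unusually_large and unusual_location

# A cardholder who rarely spends more than $200 suddenly shows a $2,500
# foreign charge: the rule raises a flag for follow-up with the customer.
print(looks_suspicious(2500.0, "RU", typical_max_charge=200.0, home_country="US"))
```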

While hotel people have long dealt with thefts of personal property from rooms, the notion that digital information garnered from customers can itself be more valuable than anything the guests carry on their persons would have been a novel one to the hotel students attending Cornell when I was there, at any rate.  And while I'm sure that Cornell's current hotel administration curriculum includes something about IT management, I suspect it's a recent addition, and almost certainly wasn't taught forty years ago.  So it's not surprising that a type of business that historically wasn't much involved in digital systems turns out to be especially vulnerable to modern-day cybercriminals. 

It's still not clear whether any Starwood customer information was actually used illegally, but such questions take time to answer.  That hasn't stopped some lawyers from filing a national class-action lawsuit against Marriott.  Both the lawyers and the cybercrooks are taking advantage of the fact that the Starwood chain tends to attract upscale customers who have both lots of money and connections worth stealing, and who are more likely to support a class-action lawsuit for that reason.  If your humble scribe has stayed at a hotel in the Starwood chain, I don't remember it, as my taste runs more to Best Western or La Quinta.

Still, for the sakes of the 500 million people affected, I hope this incident turns out to be less serious than it appears to be now.  And I bet that the IT management course at the Cornell School of Hotel Administration will cover the famous 2018 Marriott data breach as a case study in the future.

Sources:  I referred to reports on the data breach carried by NBC News at https://www.nbcnews.com/tech/security/marriott-says-data-breach-compromised-info-500-million-guests-n942041 and the Hawaiian paper the Star Advertiser at http://www.staradvertiser.com/2018/12/01/breaking-news/national-class-action-lawsuit-filed-over-marriott-data-breach/.

Monday, November 26, 2018

The Climate Change Report: Danger or Opportunity?


Last Friday, a day when many Americans are thinking of shopping rather than climate change, the Fourth National Climate Assessment was released by the U. S. government.  A massive 1600-page document, it reportedly goes into great detail about how projected increases in average temperatures are going to affect the U. S., especially the economy.  I have read only the twelve summary statements at the beginning of the report, but those are pessimistic enough.  Floods, storms, and rising temperatures will threaten to overwhelm our already crumbling infrastructure of drainage systems, water supplies, power grids, and roads.  Agricultural policies and practices that have worked in the past will fail to keep up with changes in crop viabilities worldwide.  The "trillion-dollar coastal property market" will be threatened with collapse, and, well, things are going to go you-know-where in a handbasket, generally speaking.

This is not an alarmist tome.  A lot of serious professionals have done a lot of work to compile evidence-based predictions that have focused not just on gee-whiz sentimental issues such as polar bears (not that I have anything against polar bears), but on bread-and-butter issues like economic and infrastructure problems that will probably get worse.  Given the present climate (so to speak) in Washington, this was a clever strategy on the part of the report's organizers.  If money men are in power, talk about money to get their attention.  Whether the report will inspire the results the writers want to get is another question.

Most people admit there is more carbon dioxide, along with other greenhouse gases, in the atmosphere than there used to be, and that this increase will lead to some amount of rise in the average global temperature.  The hard part of this topic is deciding what to do about it.  From my superficial skimming of the report's summary, I glean that its recommendations fall into two categories.

One is to cut down greenhouse-gas emissions.  This is the hardest bullet to bite.  The global economy presently runs largely on fossil fuels, and the green fantasy of a zero-carbon-emission economy is just that—a fantasy.  I'm not saying it will never happen, but to achieve it even in a long lifetime from now would require a global dictatorship that would make Cambodia's Pol Pot look like Mr. Rogers.  Add to that the fact that greenhouse gases don't stay where they are emitted, but eagerly mix into the global atmosphere, and you have the world's largest tragedy of the commons—it's in every nation's general interest to reduce greenhouse gas emissions, but it's in every nation's specific interest to get everybody else to do it while you yourself keep burning coal, oil, and gas.  Given the practical realities of international politics, it begins to look like the wisest course for an individual country is to plan for the worst-case warming scenario defensively, while doing no more than your fair share to cut back emissions.

And that's where another word, "adaptation," becomes prominent in the report's twelve summaries of findings, in the second category of recommendations.  Here's where engineers can make a difference that is pretty uncontroversial.  Are floods going to be predictably more severe?  Improve flood-abatement planning and design so that even the new worst-case flood doesn't kill as many people or damage as much property.  Are tides going to be higher on the coasts?  There are millions of opportunities to do something about that in every stretch of coastline, and most of them involve spending money on engineering projects.  I'm not saying that engineering firms and engineers should profit by the harm that global warming might otherwise cause.  But most large-scale public engineering works—utility and transportation networks, for instance—already involve forecasting and planning.  Climate change, to the extent that it is predictable, must factor into those plans, and can even motivate new or replacement construction as an added incentive to do something, rather than just letting the old infrastructure continue to crumble while fighting crisis fires as they arise.
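
As a back-of-envelope illustration of what factoring climate change into such plans can mean, here is a sketch that scales a historical design flood by a projected climate uplift.  Both numbers are hypothetical; a real project would take them from hydrologic studies and the applicable design codes:

```python
# Back-of-envelope adaptation example: scale a historical design flood by a
# projected climate uplift. Numbers are hypothetical, for illustration only.

def adapted_design_flow(historical_100yr_flow_cms, climate_uplift_fraction):
    """Historical 100-year peak flow increased by a projected climate factor."""
    return historical_100yr_flow_cms * (1.0 + climate_uplift_fraction)

historical = 850.0   # hypothetical 100-year peak flow, cubic meters per second
uplift = 0.20        # hypothetical 20% increase projected for mid-century
print(f"Adapted design flow: {adapted_design_flow(historical, uplift):.0f} m^3/s")
```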

Broadly speaking, the profession of engineering bears some responsibility for all that carbon dioxide in the air.  Modern society as a whole made the decision to use first steam power, then electricity and fossil-fueled transportation, but none of that could have happened without engineers.  It is only fitting that engineers will help us deal with the consequences of higher levels of greenhouse gases, whatever those consequences may be. 

The chief danger I see in all the rush to do something about climate change is not technical, but political.  As the Anglo-Irish statesman and philosopher Edmund Burke noted in his 1790 work Reflections on the Revolution in France, institutions are complicated and delicate things.  No one completely understands how a national economy or a national government works.  So it is the better part of wisdom to go slowly when attempting to remedy an ill.  Radical and untried measures such as draconian carbon taxes could trigger a global economic depression that could be more harmful than the climate change they were intended to fight. 

There are those who seem to think that the world's worst existential threat is climate change, and who have the revolutionary attitude that any action is justified by such a threat, including moving toward a global type of European-Union-style government that would systematically implement controls on fossil fuels and energy use.  Burke would caution against any such move.  While it might achieve its intended technical goal of reducing climate change, the price in loss of national sovereignty and the evils that a truly effective world government might do would not be worth paying, in my estimation.

So if you have nothing better to do over the Christmas holidays, curl up with your tablet and the 1600-page Fourth National Climate Assessment and become the best-informed person you know about climate change.  As for me, I've got some Christmas shopping to do instead.

Sources:  I referred to a BBC report carried on Nov. 24, 2018 at  https://www.bbc.com/news/world-us-canada-46325168, and one at Science Magazine at  https://www.sciencemag.org/news/2018/11/climate-change-poses-major-threat-us-new-government-report-concludes.  The report itself can be accessed at https://nca2018.globalchange.gov/.

Monday, November 19, 2018

Away With All 32-Bit Mac Software


There is a totalitarian frame of mind that favors what I would call routinely hyperbolic language.  Some years ago, I read a book that was published during the Cultural Revolution in the People's Republic of China, which ran from 1966 to 1976.  It described the work of an English doctor who had defected to China and cooperated fully with the regime's propaganda machine.  The actual good he did medically, which was considerable, is not the point.  But the title of the book was classic totalitarian-speak:  Away With All Pests. 

Apple is not the PRC, but their attitude toward individuals and smaller companies in their orbit of influence is, shall we say, hardly cooperative and democratic in all cases.  Take, for example, the situation of a small or medium-size software developer, for whom a total rewrite of their software product is a crippling and possibly prohibitive undertaking.  Now, I'm not a computer scientist, and so some of what I say may be speculative or even wrong.  But from what I can tell, taking an application written for a 32-bit operating system (which was typical for PCs and Macs until the mid-2000s, when 64-bit processors from AMD and then Intel became the norm) and rewriting it for a 64-bit OS is a big deal, and presents all sorts of backward-compatibility and other issues that may be insurmountable in some cases.  So it's understandable that many software firms simply haven't bothered yet.
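
For what it's worth, here is a quick way to see the 32-bit/64-bit distinction from a script.  It shows only the word-size difference, not the real porting pain a compiled application faces (pointer-size assumptions, old libraries, plugin formats):

```python
# Quick check of whether the running interpreter is a 32-bit or 64-bit build.
# This shows only the word-size difference; it says nothing about the work
# of porting a compiled application from 32-bit to 64-bit.
import struct, sys

pointer_bytes = struct.calcsize("P")   # size of a C pointer in this build
print(f"Pointer size: {pointer_bytes * 8} bits")
print(f"Largest native integer index: {sys.maxsize:,}")

# A value that fits easily in 64 bits but overflows a signed 32-bit integer:
big = 3_000_000_000
print("Fits in signed 32 bits?", -2**31 <= big < 2**31)   # False
```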

Well, along comes Apple last June and announces that the next OS upgrade—OS 10.14, called Mojave—will be the last version to support 32-bit software at all.  High Sierra—the one my fairly new Mac runs—will tolerate 32-bit stuff, but it's the last one that will do so without problems.  So what that means is, if I have to upgrade my OS beyond what I have now, I risk losing, and am eventually certain to lose, all my 32-bit software.

Up until 2016, that included near-vital things like Microsoft Office.  Microsoft finally got in gear and issued 64-bit versions of Office for Mac, but those of you clinging to the old friendly version of Excel that I used to like so much are going to be out of luck when 32-bit becomes anathema. 

Personally, I stand to lose three different apps that are specialized enough that the supporting companies are fairly small, or, in one case, the product is just open-source freeware hosted by a government agency.  I have no idea whether these organizations are going to offer 64-bit versions in time for me to keep using them when the dark day comes that I kiss High Sierra good-by, grit my teeth, and get the new 64-bit-only OS.  But if experience is any guide, I'll lose some valuable software, and the ability to work with its legacy files, in the process.  The last time this happened I lost an expensive video editing application and all its video files—toast, after only three or four years.

Of course, if I had the attitude prescribed by our fearless Apple corporate leaders, I would not harbor such traitorous thoughts as the notion that Apple can do anything wrong, and think that the latest OS upgrade is anything other than an unalloyed boon to humanity.  And I would regard the 32-bit software vendors as running dogs of imperialism, or whatever the latest totalitarian insult is. 

But I have a life outside of the time I spend on my computer, and in that life I try to relate to things of permanence and eternal significance:  God, for instance, and my place in His universe.  And to God, as the psalmist tells us, even an entire human lifetime is like grass that springs up in the morning and dries out and dies by the same afternoon.  We may not like that idea, but if an entire lifetime dwindles into insignificance in the light of eternity, how can I take seriously the urging of even a corporation as large as Apple that their new operating system is the greatest thing to come along since—well, since we hope you can't remember back farther than last week, when we just made you give up your last 32-bit application. 

I'm not making a lot of sense here, perhaps, but I'm trying to express something about the culture of Apple, or at least the attitude it tries to encourage among its customers, that I find distasteful, unhelpful, and pernicious if carried outside the narrow field of software and applied to life in general.  It's basically the attitude that I must have the newest, latest, most advanced of everything in order not just to be happy, but to be able to function in society at all.  And because so many things we do now, from contacting friends to doing our jobs, depend on software products, Apple has the raw power to enforce that attitude at the pain of our being severely inconvenienced in various ways.  I don't expect the Apple secret police to show up at my door and haul me away if they find out I'm running 32-bit scanner software.  But just the other day, I had to let go of a Canon scanner that was still mechanically perfect because I discovered that there are no drivers available for it that are compatible with the operating system of the Mac I bought last spring.  What's the difference between having a scanner worth $100 quit working because of something Apple did, and paying a $100 fine to the cops?  Not a lot that I can tell.

Calmer heads will urge me to take the bitter with the sweet, and will remind me of all the good things I can do with computers and software that I wouldn't be able to do otherwise, and to take upgrade losses like this in stride.  Well, maybe they have a point.  But Apple in particular is running its 32-bit ban in a rather cultural-revolutionish way, and unless everybody decides to abandon Macs altogether in protest (which is about as likely as it was for 700 million Chinese to revolt in 1966), we will all have to knuckle under and give up our 32-bit applications.  All I can hope for is that my new machine keeps running a long time and I don't have to get the new OS for any reason.  And maybe those three software outfits will come out with 64-bit versions of their software, but I'm not holding my breath on that either.

Sources:  Last July 9, Computerworld carried the article  "What Apple's 32-bit app phase-out on Mojave means to you" at https://www.computerworld.com/article/3269007/apple-mac/what-apples-32-bit-app-phase-out-on-mojave-means-to-you.html.  And I also referred to an article at the PC Magazine website at https://www.pcmag.com/article/350934/32-bit-vs-64-bit-oses-whats-the-difference.

Monday, November 12, 2018

Aquinas Looks at Bitcoins


On the face of it, it's hard to think of two more unrelated subjects than St. Thomas Aquinas (1225-1274), the greatest philosopher of the Middle Ages, and bitcoins, the original blockchain-enabled digital currency that has spawned a flock of imitations and variations.  But Aquinas set out some guidelines that can let us at least speculate (so to speak) about what he would say about bitcoins.

It's hard to imagine how different the economy of 1250 A. D. was from today's economy, but some things were the same.  There were merchants, traders, banks, and markets then as well as now.  But in Europe, everything was done under the direct or at least indirect supervision of the Church, and so anyone who tried something too innovative had to justify it on the basis of Christian doctrine. 

Aquinas viewed money much as his philosophical ancestor Aristotle did:  as a medium of exchange, a legitimate accounting method that allowed trade to take place without the inconvenience of bartering this good for that unrelated service.  To use a time-honored analogy, if a shoemaker needs bread and a baker needs his shoes repaired, a swap could conceivably be arranged without money.  But if the baker needs his shoe fixed at a time when the shoemaker doesn't happen to want any bread, the barter system breaks down.  Money solves this problem by keeping track of the values of goods and services and allowing trade to take place without the need for barter.  The baker takes money he's earned and pays the shoemaker for his services, even if the shoemaker doesn't like the kind of bread the baker makes.  And so on.

Even in the case where a person gains a speculative advantage, Aquinas said that there is nothing necessarily wrong or sinful in taking that advantage.  According to an article by Murray Rothbard, Aquinas gave the example of a trader who is the first to bring a wagonload of food to a famine-stricken area where food is very scarce, and consequently the price he can get is very high.  In essence, Aquinas asks the question, "Would it be wrong for the trader to charge the prevailing high price, even though other traders will probably follow him and lower the price?  Or should he charge the lower price that he knows will prevail after the other traders arrive?"  Aquinas says charging the higher initial price would not be a sin, though it would be more charitable to sell the food below the market price.  So in saying this, he implied that taking advantage of a favorable market position, as we might put it today, is not necessarily wrong.

The bitcoin variety of digital currency is a particularly pure form of speculation in which the thing of value is so abstract as to be nearly nonexistent.  A bitcoin itself is just a record of transactions that play out under certain rules that are publicly known, and can be created only with the expenditure of a certain amount of energy, time, and other resources.  Its value in terms of more familiar monetary measures such as the dollar depends entirely on what people perceive its value to be.  While that perception is not completely random, as the throw of a die is, it is so far beyond the control of most individuals that it might as well be random.  So in this aspect, speculating in bitcoins amounts to a kind of gambling, as many other forms of financial speculation do as well.  And Aquinas had something to say about gambling.
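
Before getting to gambling, a brief aside for the technically curious:  here is a toy "proof of work" sketch showing what that expenditure of energy and time looks like in miniature.  You must search for a number whose hash meets an arbitrary target; real Bitcoin mining uses the same idea, with double SHA-256 and an enormously higher difficulty:

```python
# Toy proof-of-work: find a nonce whose hash of (block data + nonce) starts
# with a required number of zero hex digits. Real Bitcoin mining uses the
# same principle with double SHA-256 and vastly greater difficulty.
import hashlib

def mine(block_data, difficulty=4):
    """Return a nonce making sha256(block_data + nonce) start with zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("example transactions")
print("Found nonce:", nonce)   # raising the difficulty makes this take much longer
```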

In one section of his magnum opus, the Summa Theologiae, Aquinas says that giving gambling winnings to charity would be wrong if the winnings were garnered from those who have "no power to alienate their property," such as children and the mentally disabled.  And if gambling is illegal according to civil law, it would also be wrong.  But he implies that gambling winnings from a game in which everyone knows and understands the rules, and gambles anyway, would at least not be sinful, although as with any other indulgence, excessive gambling can become harmful to oneself and others and becomes a sin against charity.  So to the extent that buying bitcoins amounts to gambling, Aquinas would also give them a conditional pass.

I will close with a personal bitcoin story that I hope the subject of it won't mind my sharing, if I don't give any names.  Some time ago I had a student in one of my classes come up to me and ask for advice.  It turns out she had learned about the bitcoin business in the very early days of its existence when you could get some for a few dollars each, and she'd gone ahead and bought some.  At the time she approached me about the matter, they were worth some very serious money—many thousands each, was my impression—and she wanted some advice as to what to do with them.  This was when their value was reaching a historic high.

I warned her about adverse tax implications, and told her I was no financial expert, but selling a few of them with awareness of her tax situation wouldn't be a bad thing.  I'm not sure what she did, but it's likely she was able to pay for her whole college education that way and then some.  I also encouraged her to give some of her profit away as a way of developing a habit of charity.

To be frank, I began this post hoping that I could find a blanket condemnation by Aquinas of bitcoins.  But I don't see that in what I was able to find, at least in this brief and superficial inquiry.  After all, we're used to our bank accounts being recorded in digital form, and bitcoins are just an elaboration of digital banking, in a sense, although with different rules.  So I'm pretty sure that if we had Aquinas with us today, and we could manage to translate the idea of digital currency into Latin, he would catch on and probably say that there's nothing intrinsically wrong with bitcoins.  But like any medium of exchange, bitcoins or dollars or bars of gold can either be used wrongly—e. g. stolen—or used to benefit people.  The problems with them, if any, will not be found in the software or the hardware, but will arise from the human heart.

Sources:  I referred to the website https://mises.org/library/philosopher-theologian-st-thomas-aquinas which carries an article dated 12/25/2009 "The Philosopher-Theologian:  St. Thomas Aquinas" by Murray Rothbard, and a Google-enabled excerpt from the Cambridge University Press edition of Aquinas' Summa Theologiae edited by R. J. Batten, O. P., vol. 34 (section 2a2a3, 23-33, art. 7.2), p. 263.

Monday, November 05, 2018

The Google Walkout: Not A Strike, But. . .


Last Friday, Nov. 2, some 20,000 employees of the Google division of Alphabet Inc. walked out worldwide at 11:10 AM local time, presumably taking a long weekend off.  The reason?  According to the seven core organizers who published an essay detailing their demands, the proximate cause was the news from the New York Times that Android developer Andy Rubin received a $90 million severance package after being credibly accused of sexual harassment, and that other top executives allegedly guilty of similar escapades have also been let go with generous golden parachutes.  The #MeToo movement has raised the visibility of such things to the extent that the Google organizers found enough grass-roots support among both male and female employees around the world to stage a successful job action that garnered headlines and brought a promise from Google CEO Sundar Pichai to meet with them the following week.

As Bloomberg editorial writer and Harvard law professor Noah Feldman pointed out, this is not your traditional labor-management strike, with demands from a homogeneous group of laborers for higher pay and shorter hours.  Feldman likens it more to a student protest at a university aimed at a cultural or value issue such as racism, sexism, or whatever the popular -ism of the day happens to be.  For example, students last March staged protests about gun control, a subject that their universities have relatively little say in. 

I partly buy Feldman's argument, but partly not.

He's right in that this is something new in the area of labor-management relations.  One novelty is the international scope of the protest, reaching literally around the world in countries with radically differing labor laws and cultural milieus of their own.  Such international reach is necessary for the global-capitalist world we live in if a labor action is to be effective against a multinational corporation.  Another novelty is the type of worker involved:  well-paid professionals (largely engineers), not low-level manual laborers and semi-skilled workers.  A third novelty is the subject matter of the protest:  It is no skin off most workers' personal noses that Andy Rubin got a $90 million severance package—such an amount is chump change to Google, and dividing it up among the protesters is not the point.  The point is that a certain type of wrongdoing—sexual harassment, in this case—was tolerated by management, and Google didn't punish the wrongdoers, who were treated financially as well as any other upper management person leaving the company for a neutral reason.  So it's the company's culture and its moral implications that people are angry about.

But I part company with Feldman when he says a company can deal with these kinds of things at little if any economic cost, and can usually even support the demands themselves without much trouble.  Topping the list of demands the core organizers published was a call to end forced arbitration and to establish the right to have a "representative of their choosing" with them at any meeting with Human Resources.  Unionized workers will recognize in this a move toward the standard right of having your union rep with you when you meet with management about a personnel issue, and of having disputes settled not in company-provided arbitrators' offices, but by a third party such as a law court or independent arbitrator.  The second demand was for an end to "pay and opportunity inequity," which sounds to me like it could cost quite a bit.  The devil is in the details of such inequity and how to fix it, but you can bet the walkout organizers won't be happy if male engineers get their pay cut in order to increase the pay of female engineers, for example. 

So in these and the other demands, one can perceive some of the same types of demands that classic unions such as the United Auto Workers imposed on automakers in the 1960s:  adjustment of pay inequities and improvement of working conditions.  The organizers of last week's walkout have carefully refrained from using the word "union" in any of their statements that I have seen, but if it talks like a union and acts like a union, they are effectively in the early stages of forming a union, albeit one that is very different from the classic national unions. 

Personally, I am no friend of unions in general, but a saying I came up with while I was employed at a university which had a faculty union may apply here:  "Unions are a sign of bad management."  In a well-run company that values both the labor and the opinions of its employees, management will create a working environment that is positive enough that any attempts at organizing the workers into a union will fail.  (Of course there are underhanded ways of suppressing union organizing, but I'm not talking about that.)  In most cases of unionized workers, at some point the management has been greedy and negligent enough that their employees decided to take united action to get justice from their employer, whether that amounted to an eight-hour day and basic safety regulations, as it did in the 1930s, or higher standards regarding sexual harassment and its consequences, as it does in the 2010s at Google.  It looks to me like Google's management has crossed the line into union-creating territory, and now they're going to have to deal with the consequences of their actions, or inaction.

Feldman is right in that this situation may have initiated a new era in labor-management relations.  Historically, engineers have by and large not been unionized in the U. S., but there are examples of well-paid professions which are, and arguably to the benefit of the employees (think airline pilots, who have been unionized for decades).  So there is nothing intrinsically contradictory, illogical, or particularly immoral about a union for engineers.  Perhaps if engineers design a union, it will leave behind some of the potential for corruption, graft, and union-management wrongdoing that was endemic in some of the classic unions like the Teamsters.  Instead, maybe engineers will create lean, mean, and focused virtual labor organizations that form, achieve their intended purposes, and then go away until the next crisis comes along.  That's how this job action happened, and the organizers have perhaps found a new path forward for engineers to assert themselves collectively in the face of soaring executive compensation and job uncertainty.

Well, I better quit before I start sounding like a union organizer myself.  But the Google walkout is the first instance of a novel form of labor-management relations that may bear interesting fruit in the future.

Sources:  The demands of the core organizers of the Google walkout were carried by the website The Cut at https://www.thecut.com/2018/11/google-walkout-organizers-explain-demands.html.  The New York Times article about the sexual-harassment issues of Andy Rubin and others can be found at https://www.nytimes.com/2018/10/25/technology/google-sexual-harassment-andy-rubin.html.  And Noah Feldman's commentary on the situation was carried by Bloomberg News at https://www.bloomberg.com/opinion/articles/2018-11-02/google-walkout-isn-t-a-traditional-union-workers-strike.