Monday, August 27, 2018

The Morani Bridge Collapse: Style Over Substance?

Riccardo Morani (1902-1989) was an Italian civil engineer and bridge designer who was one of the earliest proponents of bridges built mainly of prestressed concrete rather than steel.  In 1967, a bridge he designed was put into service in Genoa, Italy.  It spanned a river, some railroad tracks, and other portions of the city with three tall pylons, each of which had concrete stays reaching diagonally down to the roadway, which was suspended some 145 feet (44 m) above the ground.  It came to be known as the Morani Bridge, after its designer.

On Tuesday, August 14, during an intense rainstorm, one of the tower-supported sections of the bridge suddenly collapsed.  As of today (Aug. 26), a total of 43 people have died as a result of the accident, not to mention injuries and property damage, which will total in the millions.  Government officials have called for the revocation of the contract with Autostrade per l’Italia, the private firm that handles highway maintenance in Italy.  One mourner at the state funeral held for many of the victims said that “In Italy, we prefer ribbon-cuttings to maintenance.” 

Engineering experts consulted by the media all said it was too soon to draw any conclusions about what might have caused the bridge to fall.  Bridges designed by Morani have a history of requiring more maintenance than more common designs do.  The stark elegance that appealed to clients around the world looking for something distinctive to add to a city skyline was achieved by demanding a great deal of the material in Morani’s bridges.  As we have mentioned before, pure concrete has almost no strength in tension, so to use it as a structural material, it has to be reinforced with steel “rebars” and other components that can withstand pulling stresses.  This is especially true of the stays that slanted down from the tops of the towers to support the roadbed.  Over time, corrosion can attack these tension members, sometimes invisibly, deep within a vital part of the structure. 
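As a toy illustration of why such tension members depend so completely on their steel, here is a minimal sketch in Python.  The load and allowable stress below are assumed round numbers chosen for illustration, not data from the actual bridge; the point is only that the stress in the remaining steel rises in direct proportion as corrosion eats away the cross-section.

```python
# Illustrative only: the load and stress figures are assumed round numbers,
# not properties of any real bridge.

def required_steel_area_mm2(tension_kn, allowable_stress_mpa):
    """Steel area needed to carry a tensile load: A = F / sigma."""
    return tension_kn * 1000 / allowable_stress_mpa  # N / (N/mm^2) = mm^2

# Suppose a stay must carry 10,000 kN of tension and the steel is
# limited to a working stress of 250 MPa (N/mm^2):
area = required_steel_area_mm2(10_000, 250)
print(f"steel area needed: {area:,.0f} mm^2")  # 40,000 mm^2

# If corrosion removes 20% of that steel, the stress in what remains
# jumps to F / (0.8 * A), a 25% increase:
stress_corroded = 10_000 * 1000 / (0.8 * area)
print(f"stress after 20% section loss: {stress_corroded:.1f} MPa")  # 312.5 MPa
```

Real stay design involves safety factors, prestressing losses, and dynamic loads far beyond this arithmetic, but the simple proportionality is the heart of why hidden corrosion is so dangerous.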

The evidence of why the bridge collapsed is buried in the huge piles of rubble that workers will need to clear meticulously, and because such work is both a huge project on its own and demanding of attention to detail, it may be months or even years before we have an answer.  After a bridge in Minneapolis collapsed in August of 2007, it took over a year for the U. S. National Transportation Safety Board to issue its final report on the accident, which attributed the collapse to a design flaw that made a gusset plate too weak. 

The problem with forensic investigation of prestressed-concrete bridges is that concrete is a much more complex material than steel.  Unlike steel, which is fabricated under carefully controlled conditions in a steel mill, concrete is often formed onsite, and the way it is mixed, poured, and treated after pouring can influence its ultimate strength and other properties.  Nevertheless, most prestressed-concrete bridges withstand the stresses they were designed for, and so the reasons for the Morani collapse will be interesting to discover, if they can be found.

While we still do not know whether the collapse was due to an initial design flaw or faulty maintenance, the question of maintenance for bridges and other vital pieces of infrastructure is an urgent one that industrialized nations all around the world are struggling with.  In 2017, the American Society of Civil Engineers (ASCE) gave the U. S. a D+ in its “infrastructure report card,” saying that 56,000 bridges (about 9% of the total) were “structurally deficient” in 2016.  While the situation has not reached such a crisis that we see bridges falling down every month, tragedies like the Morani collapse remind us that the price of deferred maintenance is sometimes much higher than anyone would like to pay. 

It’s a little bit like preparing for war.  The only way you know you didn’t spend enough money on preparing for a war is if you lose it.  You can win with barely enough resources, or with three times more resources than you need, and the result is the same.  The art and science of maintenance consists in doing enough to prevent nearly all major tragedies and to do something about minor problems fast enough, while not simply wasting resources on painting a wall that doesn’t need painting, for example. 

Judging by the rarity of bridge collapses, most bridges were either built well enough to start with to survive many decades with whatever maintenance they’ve received, or have been maintained well enough to keep standing.  But the shock value of a major bridge collapse is one of the main motivators for public funding of infrastructure maintenance, which has none of the appeal of new construction. 

Engineers are mostly used to working out of the limelight, doing dull but necessary things like scheduling expensive maintenance that takes money away from more flashy and popular government activities.  Riccardo Morani was something of an exception to this rule, attaching his name to striking bridge designs that caught the eye of the public time after time.  If there’s the equivalent of an Internet connection wherever he is, I’m sure he’s sorry to see what has happened to his creation in Genoa, whether the failure is due to him personally or due to insufficient maintenance over the five decades the bridge has carried traffic since it opened.  But maintenance is a job for the living, not the dead, and engineers in charge of maintenance owe it to their constituent publics to be sure that tragedies such as the Morani bridge collapse don’t happen.  We look forward to finding out what went wrong in Genoa a couple of weeks ago, and applying those lessons to future problems so that they can be avoided before more people get killed.

Sources:  I referred to news reports on the accident carried by Time’s website and The Guardian.  I also referred to Wikipedia articles on Riccardo Morani and the I-35W Mississippi bridge, and to the ASCE infrastructure report card.

Monday, August 20, 2018

Some Answers About the Panhandle Cornfield Meet of 2016

A “cornfield meet” in railroad parlance is a head-on collision between two locomotive engines.  Needless to say, such occurrences are avoided if at all possible.  But on the morning of June 28, 2016, two freight trains collided head-on in the Texas Panhandle, killing three people and causing an estimated $16 million in damage.  At the time I blogged about it, the only information available was news reports.  A few weeks later, the National Transportation Safety Board (NTSB) issued a preliminary report on the accident.  While the NTSB has not made public any additional data on the accident since then, the preliminary report makes clear that human error was likely at fault.

The BNSF line through the town of Panhandle is a single-track line, and two-way traffic is managed with a series of sidings.  The dispatchers, probably in the Fort Worth regional train control center, planned to switch the westbound train to a siding near the town, where it would remain while the eastbound train passed by on the main line.  In case the eastbound train arrived too soon, before the westbound train had moved completely off the main line onto the siding, two signals were set along the main line west of the eastern switch, where the westbound train was to leave the main line for the siding.  The first signal the eastbound train encountered was solid yellow, which tells the engineer to slow the train to a maximum of 40 MPH and be prepared to stop at the next signal.  The second signal was set to red, which forbids the engineer from moving any part of the train past it. 

So the plan was for the eastbound train to slow down at the yellow signal and stop at the red signal, while the westbound train arrived at the eastern switch and eventually cleared the main line by running onto the siding.

What happened instead was this.  Before the dispatchers had a chance to change the eastern switch from the main line to the siding, the eastbound train passed the yellow signal on the main line going at 62 MPH and the red signal at 65 MPH, heading through the switch on the main line straight for the westbound train.  When the engineer on the westbound train saw what was happening, he managed to jump from the cab.  But his conductor died in the resulting crash, as well as the engineer and conductor on the eastbound train.  The NTSB report somewhat ruefully notes that positive train control (PTC) was scheduled to be installed on this section of track later in 2016, although planned PTC installations have suffered repeated delays in the past.

PTC is a semi-automated system that promises to reduce the chances for human error in train operations.  A PTC system would have figured out that the two trains were heading toward a collision and would have at least slowed them down, if not prevented the accident entirely.  As it stands, the physical evidence points responsibility for the accident toward the crew of the eastbound train, who failed to respond to the clearly visible yellow and red signals in time. 
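The core of what such a system does can be sketched in a few lines of Python.  This is a hypothetical simplification, not BNSF’s actual PTC logic; the limits for yellow and red aspects come from the signal rules described above, while the 70 MPH green-aspect limit is an assumed figure for illustration.

```python
# Hypothetical sketch of a PTC-style enforcement check, not any real system.
# If the crew fails to obey a signal aspect, the system intervenes.

LIMITS_MPH = {"green": 70, "yellow": 40, "red": 0}  # green limit is assumed

def ptc_action(aspect, speed_mph):
    """Return the enforcement action for a train approaching a signal."""
    limit = LIMITS_MPH[aspect]
    if speed_mph <= limit:
        return "no action"
    # Train is overspeed for this aspect: a real system would warn the crew
    # first, then apply a penalty brake if they did not respond.
    return "stop train" if aspect == "red" else "penalty brake to %d mph" % limit

# The Panhandle eastbound train passed the yellow at 62 MPH and the red at 65:
print(ptc_action("yellow", 62))  # penalty brake to 40 mph
print(ptc_action("red", 65))     # stop train
```

The essential point is that the check is automatic: it does not depend on anyone in the cab being alert at the moment the signal comes into view.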

We may never know what distracted them, but people make mistakes from time to time.  And some mistakes exact a fearful penalty. 

While even one death due to preventable causes is a tragedy, some context to this accident is provided by a slim volume I have on my shelves:  Confessions of a Railroad Signalman, by James O. Fagan, copyright 1908.  It was written at a time when railroad-related fatalities (passengers and railroad employees combined) were running at about 5,000 a year, a much higher rate per train-mile than today.  Fagan’s concern was that railroad employees of his day had to deal with on-the-job pressures that encouraged them to take risks and shortcuts that flouted the rules, and that the management system was ill-equipped to discipline misbehaving employees. 

While much has changed in railroading since 1908, any system that relies on a human being’s alertness can still fail if the person’s attention flags.  And that seems to be what happened outside Panhandle, Texas on that summer morning in 2016. 

If and when PTC is installed on most stretches of U. S. railways, the hope is that fatal and costly accidents will decline to even lower levels than what we see today.  The limiting factor after that will be mechanical malfunctions, perhaps, or dispatching errors at a high enough level to overrule the PTC system.  In any case, we can expect rail travel and shipping to be even safer than it is now, which compared to 1908 is pretty safe already.

Machines and systems are deceptively solid-looking.  It doesn’t seem possible that thousands of tons of steel rolling stock and rails can change very fast.  But the way it’s used can change, and PTC promises to do that.  Eventually, I suppose that the nation’s entire rail system will be run by computers and will resemble nothing so much as a giant version of a tabletop model train, running smoothly and without collisions or hazards.  Of course, automobile drivers will still manage to stop on grade crossings and people will walk on train trestles, so those types of accidents can’t be prevented even by PTC.  To eliminate those types of accidents, we’d have to tear up the whole system and rebuild it the way the English built their rail systems from the start:  fenced-off railroad property, virtually no grade crossings (tunnels and bridges instead), and other means to keep people and trains permanently separated. 

But I suspect we as a society are not that exercised to eliminate the last possible railroad fatality from the country.  So instead, we will enjoy whatever benefits PTC brings along and hope that we personally can stay out of the way of the trains. 

And modern-day cornfield meets will at last join their ancestors as a historic footnote, a quaint disaster that simply can’t happen anymore.  Like soldiers dying on the last day of a war, the crew members who died in the 2016 accident may be among the last to depart in that singularly violent way.  But for those of us who remain, and whose continued survival depends on our being alert, whether behind the throttle of a locomotive or the wheel of a car, this story is a good reminder to keep awake and pay attention.

Sources:  The NTSB report on the June 28, 2016 Panhandle, Texas accident can be found in the agency’s listing of railroad incident reports.  For those with a certain type of morbid curiosity, there is a collection of silent movies of three or four intentionally-staged cornfield meets between steam locomotives that can be viewed on YouTube.  Confessions of a Railroad Signalman was published by Houghton-Mifflin. 

Monday, August 13, 2018

Exporting Enslavement: China’s Illiberal Artificial Intelligence

In 1989, I had the privilege of visiting Tiananmen Square in Beijing only a few months after the famous June Fourth protests that the Chinese government violently suppressed.  Our tour guide showed us black marks on the pavement that were left by fires during the conflict—a memory that has not faded.

Much has changed since then.  China is now a world leader in many areas of science and technology, including artificial intelligence (AI).  But the nature of the Chinese government has not changed, and as Ryan Khurana points out in a recent online article in National Review, its illiberal policies may transform AI into a weapon that similar governments around the world can use to enslave their citizens. 

To avoid confusion, I should define a couple of political terms.  In the sense I intend here, the term liberal refers to what political scientists call “classical liberalism.”  Simply put, a liberal government in this sense encourages the liberty of its citizens.  It acknowledges fundamental rights such as the right to life, the rights to worship freely and express one’s views without fear of government reprisal, and the right to participate meaningfully in political affairs.  The intention of the founders of the United States of America was to form a liberal government in this sense.

By contrast, illiberal governments are top-down operations in which those in charge have essentially unlimited power over the mass of citizens.  Most monarchies were set up this way in theory, and from its founding the People’s Republic of China has behaved in a consistently illiberal way, and continues to do so under President Xi Jinping.  Anything that assists the Chinese government in spying on its citizens, learning about their private as well as public actions, and controlling their behavior so that they conform to a model pleasing to the government is going to get a lot of support.  And AI fits this bill perfectly, which is one reason why China is not only pouring billions into AI R&D, but exporting it to other countries that want to spy on their people too.

Khurana points out that Zimbabwe, an African country well-known for its human-rights abuses, has received advanced Chinese AI technology from a startup company in exchange for letting the firm have access to the country’s facial-recognition database.  So China is helping the government of Zimbabwe to keep tabs on its citizens as well.  Maybe Zimbabwe will come up with something like China’s recently announced Social Credit system, which is a sort of personal reliability rating that eventually every person in China will receive.  Think credit rating, only one based on the government’s electronic dossier of your behavior:  what stores you visit, what friends you have, what meetings you go to, what TV shows you watch, and whether you go to church. 

Khurana says that we are engaged in a kind of arms race reminiscent of the old Cold War conflict between the Soviet Union and its satellites, and what used to be called the Free World.  Back then, the game was to see whether the U. S. or the U. S. S. R. could dangle the most attractive technological baubles in front of this or that country to tempt it toward one side or the other.  It wasn’t only military technology, but weaponry was the trump card. 

Things are different now, and AI is not like a howitzer—you can do lots of things with it, both peaceful and warlike.  Or liberal and illiberal.  But unless a smaller country has already developed a capable AI technological base of its own, it is likely to want only turn-key systems already designed to do particular things.  And companies in China who have learned how to help the government use AI to monitor and control people will naturally find it easiest to help other governments do the same illiberal thing.

Khurana says the U. S. side is currently losing this battle.  The federal government and military have been slow to get up to speed on using AI for defensive purposes.  When the Department of Defense tried to engage Google in a cooperative AI project to identify terrorists, the company pulled out, and other attempts to use AI in the military have been stymied because critical pieces of intellectual property turn out to be linked to Russian or Chinese ownership. 

There are two aspects to this problem.  The international aspect is that around the world, Chinese AI is coming with illiberal strings attached, and governments with little interest in protecting their citizens’ freedom are eagerly following China’s lead in using AI to spy on and suppress their peoples.  The domestic aspect is that the U. S. is going perhaps too far in the direction of pretending that we are all one big happy AI family, and that we can get AI technology from anywhere in the world and use it for our own private, liberal, or defensive purposes. 

But the world is not that way.  Back when wars depended mainly on hardware, nations contemplating future conflicts made sure they stockpiled essential materials such as tungsten and vanadium before starting a war.  Now that international conflicts increasingly involve cyberwarfare and AI-powered technology, it is foolish and shortsighted to ignore the fact that China is flooding the globe with its AI products and services, and to pretend we don’t have to worry about it and will always be able to outsmart them.  Physical weapons have a way of being used, and the same is true of AI designed for illiberal purposes.  Let’s hope that freedom doesn’t get trampled underfoot in the rush of many countries to get on the Chinese AI bandwagon.

Sources:  Ryan Khurana’s article “The Rise of Illiberal Artificial Intelligence” appeared on the website of National Review on Aug. 10, 2018.

Monday, August 06, 2018

In Professionals We Trust—Or Do We?

In a recent New York Times opinion piece, science journalist Melinda Wenner Moyer bemoans the fact that vaccine researchers are getting paranoid about publishing scientific papers that contain anything negative about vaccines, out of fear that the anti-vaccine movement will weaponize such results.  This problem has important implications for public trust in professionals generally, including engineers.

First, a little background.  Life before vaccines was shorter and riskier.  Smallpox, diphtheria, tetanus, and the lesser but still potentially fatal childhood diseases of measles and mumps killed millions and left survivors scarred for life or otherwise disabled.  That is why the world’s most advanced thinkers, including the New England minister and Princeton president Jonathan Edwards, embraced smallpox inoculation, the crude forerunner of the vaccination that Edward Jenner popularized later in the 1700s.  Unfortunately, when Edwards was inoculated during an outbreak of the disease in 1758, it led to full-blown smallpox that killed him. 

Vaccination methods were crude back then, and over the following decades, the smallpox vaccine was refined to the point that in 1980, the World Health Organization declared that smallpox had been eradicated.  But Edwards’ death is a reminder that progress isn’t uniform, and bad news as well as good news has to be shared among professional practitioners if progress in any technology is to be made.

Up to about the year 2000, the attitude of the public in most industrialized nations toward vaccines was almost uniformly positive, and not controversial.  Each new and more effective vaccine, such as the Salk and Sabin vaccines against polio in the 1950s, was hailed as one more example of science’s triumph over disease.  Then in 1998, a gastroenterologist named Andrew Wakefield published the results of a small study based on 12 cases that seemed to indicate a link between autism and the measles-mumps-rubella (MMR) vaccine that was routinely given to millions of small children every year. 

Wakefield’s paper was published in the respected medical journal Lancet, and created a huge controversy.  Parents of autistic children now had something on which to blame the mysterious syndrome, and as time went on, activist groups of parents formed and made Wakefield a hero.  The nascent Internet became a powerful tool in the hands of these groups, as it bypassed the peer-review process that scientists must adhere to and enabled isolated parents of autistic children to band together.  The failure of any subsequent scientific studies to confirm Wakefield’s findings didn’t slow down the anti-vaccine movement significantly.

It wasn’t until 2004 that serious questions were raised about Wakefield’s integrity.  It turned out that he was being paid by attorneys who wanted to sue vaccine manufacturers, and after further investigation revealed that Wakefield had fabricated some data, Lancet withdrew the paper and Wakefield had his British medical license revoked.  But the horse had left the barn long before that.  Currently, many well-educated and otherwise rational people refuse to have their children vaccinated for what are generally termed “philosophical reasons.”  As epidemiologists know, there is a threshold for the percent of unvaccinated people in a given population above which the risk of epidemics increases rapidly, and widespread refusal to vaccinate is partly blamed for recent outbreaks such as the 147 cases of measles centered at Disneyland in California in 2015. 
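That epidemic threshold is usually estimated from a disease’s basic reproduction number R0, the average number of people one case infects in a fully susceptible population; the classic herd-immunity formula is 1 - 1/R0.  The measles R0 range used below (roughly 12 to 18) is a commonly cited figure, included here only for illustration.

```python
# Herd-immunity threshold: the fraction of a population that must be immune
# so that, on average, each infected person passes the disease to fewer
# than one new victim and outbreaks die out instead of growing.

def herd_immunity_threshold(r0):
    """Classic formula: 1 - 1/R0."""
    return 1 - 1 / r0

# Measles is often cited with R0 somewhere in the range of 12 to 18:
for r0 in (12, 18):
    print(f"R0 = {r0}: about {herd_immunity_threshold(r0):.0%} must be immune")
```

This is why even a few percent of unvaccinated people matters for a disease as contagious as measles: the margin between herd immunity and outbreak conditions is thin.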

This story of the anti-vaccination trend is perhaps one of the clearest examples of what is a relatively new thing in Western civilization: widespread distrust of expert authority.  Back when everyone knew someone who had died of smallpox and many survivors bore scars, the promise of being able to immunize yourself and your offspring against such a terrible disease was so attractive that intelligent people such as Jonathan Edwards took the risks of what was by modern standards a very dangerous vaccination. 

Today, when the chances of anything bad happening from a vaccination are well known and down in the fifth decimal place (a few per 100,000), and the ill effects of not getting vaccinated are also well known and clearly worse than taking the vaccine, why would anybody refuse, especially on behalf of their innocent children?  Clearly, because they believe in something or someone other than the conventional scientific wisdom represented by institutions such as the medical profession, government and private research organizations, and even people as supposedly trustworthy as their own family doctor. 

The problem with all this is that some professionals really do know more about a subject than non-professionals, and when experts talk about their own fields, they are generally more worth listening to than some random website you find with Google.  The paranoia among vaccine researchers that Moyer discusses is a sad result of ignoring this basic fact of life. 

It’s like a child who is repeatedly accused falsely of stealing from the cookie jar.  If he’s punished often enough for something he didn’t do, he may go ahead and steal anyway, figuring he’s going to get blamed for it whether or not he’s done it, so he might as well enjoy the ill-gotten gains of stealing, because the negative consequences will be the same.

In embracing bogus and disproved theories of harm from vaccines, anti-vaccine groups appear to be creating the very behavior they suspected was already happening among scientists:  namely, a reluctance to report negative aspects of vaccine use.  Of course, this will cripple any efforts to improve vaccines, because you have to know what went wrong before you can fix it. 

Let’s hope that engineers keep their collective noses clean in this regard.  Few polls of trust in the professions even ask the public about engineers.  I had to dig for a while before I came up with a global poll from 2015 that lumped engineers in with technicians, and that combined group came in on the trust scale about in the middle, just below pilots and just above soldiers.  Firefighters were the most trusted profession, and bankers the least.

Things could be worse, certainly.  In this fishbowl Internet age when anybody who says anything eye-catching, whether true or not, is liable to become world-famous overnight, engineers need to be especially careful in their public pronouncements.  It’s good to let the public know your considered expert opinion about something.  But first, be sure you’re right.  Lying about a matter of expert opinion that’s of vital interest can create harmful effects that go on for decades, as the anti-vaccine movement has shown.

Sources:  Melinda Wenner Moyer’s opinion piece entitled “Anti-Vaccine Activists Have Taken Vaccine Science Hostage” appeared on the New York Times website on Aug. 4, 2018.  I also consulted online sources for information about the Wakefield controversy, the Disneyland measles outbreak, and the global survey of trust in various professions.  Jonathan Edwards’ death as the result of a smallpox inoculation is well known and reported in numerous sources.