Monday, October 29, 2018

Open Loop in Lawrence, Massachusetts—Cause of the Columbia Gas Disaster


Every profession has its inside lingo, expressions that mean something only to practitioners.  In the disciplines of electrical and mechanical engineering, one such expression is to "go open loop."  The loop referred to is a feedback loop, a concept that control systems of all kinds use to regulate quantities such as speed, flow rate, and pressure.  A well-designed feedback loop works as a clever automatic hand on a control valve to maintain constant pressure in a gas supply line, for instance, in the face of variations in demand for gas or delivery pressure from the high-pressure gas main.  But in order to work, the feedback loop must be closed, or complete, all the way from the regulator that controls the pressure, say, through supply pipes to the sensor that tells the system what the pressure is, back to the regulator.  If the information flow is interrupted anywhere in the loop, the regulator usually goes to an extreme and jams, which can lead to dire consequences.  So if you ever hear an engineer talking about a person who lost control of his temper as "going open loop," that's where the expression comes from.
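
To make the idea concrete, here is a toy simulation of a closed feedback loop holding a 0.5-psig setpoint while customer demand varies.  Every number in it is invented for illustration; none of them come from a real gas system.

```python
# A toy closed-loop pressure regulator, in the spirit of the paragraph
# above.  All plant and gain values are made up for illustration; they
# are not real gas-system parameters.

def simulate(steps=200, setpoint=0.5, gain=0.1):
    """The sensor reads the SAME pipes the valve feeds, so the loop is
    closed: the valve creeps open or shut until the error vanishes."""
    valve = 0.0       # regulator valve opening, 0..1
    pressure = 0.0    # psig in the distribution pipes
    for t in range(steps):
        demand = 0.5 if t > 100 else 0.3   # customers draw more gas mid-run
        # plant: pressure relaxes toward (supply admitted) minus (demand)
        pressure += 0.2 * (2.0 * valve - demand - pressure)
        error = setpoint - pressure        # feedback: sensor -> controller
        valve = min(1.0, max(0.0, valve + gain * error))
    return pressure

print(round(simulate(), 2))  # holds near the 0.5 psig setpoint despite the demand jump
```

The point of the sketch is that the regulator never needs to know the demand; as long as the measurement comes from the pipes it actually controls, the error signal steers it to the right valve position.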

On Sept. 13 of this year, dozens of people were injured, one was killed, and over a hundred structures were damaged in and around Lawrence, Massachusetts, when a natural-gas supply line's pressure soared and turned pilot lights into blowtorches and stove gas burners into towering infernos.  The National Transportation Safety Board has released its preliminary report on the cause of the disaster, and it looks very much like it was a classic case of open loop.


That afternoon, work crews were in the final stages of performing a tie-in to connect a new set of plastic gas-distribution pipes to the system, and to decommission a set of old cast-iron pipes that dated back to the early 1900s.  While many modern gas distribution systems place individual regulators at each customer's location, the older systems such as the one in Lawrence used low-pressure gas (about 0.5 pounds per square inch gauge, or psig) in the distribution pipes.  To regulate this pressure, control-system sensors were placed on the pipes and fed their data back to regulators at the junctions between the high-pressure transmission pipes and the low-pressure distribution pipes.

The instructions to the work crews did not say anything about what to do with the sensor signals coming from the old cast-iron pipes when the gas was switched over from the old system to the new plastic one.  So when the crews shut off the manual valve that fed the old pipes, the sensors in them were still connected to the regulator that was now feeding the new plastic pipes, which were connected to most of the homes in the affected area.

It's easy to see that cutting off the valve broke the feedback loop.  The control system saw pressure falling in the old pipes, so it opened up the regulator.  But the higher pressure wasn't being sensed, because the crews hadn't been told to switch the sensor signals at the same time they switched the pipes. 
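
The same toy model shows what opening the loop does.  Here the sensor still reads the abandoned pipes, which bleed down to nothing once the valve feeding them is shut, while the regulator admits gas to the new pipes from a high-pressure main.  Again, all the numbers are invented (including the assumed 75-psig main); only the qualitative behavior matters.

```python
# The same toy model with the loop broken, as in Lawrence: the regulator
# feeds the NEW pipes, but its sensor still reads the valved-off OLD
# pipes.  Numbers are invented; only the qualitative behavior matters.

MAIN = 75.0   # assumed high-pressure main behind the regulator, psig

def simulate_open_loop(steps=200, setpoint=0.5, gain=0.1):
    valve = setpoint / MAIN   # start at the normal operating point
    new_pipe = setpoint       # psig in the new plastic pipes
    old_pipe = setpoint       # psig in the abandoned cast-iron pipes
    for _ in range(steps):
        old_pipe = max(0.0, old_pipe - 0.05)         # bleeds down after shutoff
        new_pipe += 0.2 * (MAIN * valve - new_pipe)  # tracks what the valve admits
        error = setpoint - old_pipe                  # controller watches the WRONG pipes
        valve = min(1.0, max(0.0, valve + gain * error))  # winds fully open
    return new_pipe

print(round(simulate_open_loop(), 1))  # far above the 0.5 psig setpoint
```

Because the error the controller sees never goes away, the valve winds fully open and the downstream pressure climbs toward the main pressure, which is the open-loop runaway described above.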

Pressures are monitored constantly on the system, but only at Columbia Gas headquarters back in Columbus, Ohio.  Overpressure alarms went off shortly after 4 PM at the monitoring center, but the dispatchers there had no way of shutting off the system.  All they could do was to try getting in touch with the technicians in Lawrence to tell them to shut off the gas.  Even though that was done about a minute after the alarms went off, it wasn't until 4:30 that the regulator supplying the overpressure gas was shut off.  By then most of the damage had been done. 

An article in the Boston Globe last week describes the anguish of Lawrence residents who are still waiting for Columbia Gas to restore gas service.  Many homes in the area are unlivable without gas heat, and as cold weather intensifies the situation is getting intolerable.  The company's self-imposed deadline of mid-November for having all customers back on line recently slipped to Dec. 16, and there are many complaints about poor communication regarding repair-work schedules and shoddy workmanship once the crews arrive.  Columbia Gas is going to be paying for this mistake for a very long time, as are the affected residents of Lawrence.

The behavior of Columbia Gas approaches poster-child status for how not to run a utility.  You can bet that when the gas company was locally owned and operated, the monitoring center in Massachusetts probably also had the ability to shut off the gas.  But when consolidator Columbia Gas purchased the system, the move of the monitoring operation to Ohio was probably done to save money, money which was not spent on a corresponding control system to allow Ohio monitors to shut off regulators and valves remotely.

Work-crew training and planning also came up short in this disaster.  It's sometimes hard to say exactly how much knowledge technicians and their immediate supervisors should have about the technology they are working on.  There's no need for every lowly tech to know enough to design the entire system, for example.  That's what engineers are for.  But on the other hand, you would hope that the person in charge of the work crew would know enough to realize that if the sensor signals were not swapped at the same time as the pipe systems they were connected to, there could be trouble.  Since the written instructions themselves never mentioned the sensor lines, the failure looks systemic rather than the lapse of any one worker.  But clearly, more explicit instructions and better training are needed.

Finally, Columbia Gas hasn't exactly covered themselves with glory in the public-relations department.  Gas utilities are by nature local monopolies, and it is easy for them to act like what they are, namely the only gas company in town.  Unfortunately, in misleading their customers as to when gas would be restored, they have created a lot of ill will that more careful and cautious scheduling could have avoided. 

Let's hope for Lawrence's sake that it's a mild winter in Massachusetts, and that everybody gets their gas lines working again before it's too much colder.  And for all you other gas utilities out there:  don't act like Columbia Gas.

Sources:  I referred to the article "Merrimack Valley residents voice frustration with recovery effort" which appeared on the Boston Globe website on Oct. 28, 2018 at https://www.bostonglobe.com/metro/2018/10/27/residents-voice-frustration-with-recovery-effort/dY3LvbGLsqqWj3feaeLS3O/story.html.  The preliminary NTSB report on the incident is available at https://www.ntsb.gov/investigations/AccidentReports/Pages/PLD18MR003-preliminary-report.aspx.  I blogged on this disaster on Sept. 17, 2018.

Monday, October 22, 2018

The Soyuz Failure and Emergency Return


Proponents of manned space flight have looked forward to the day when space travel will be as routine as getting on a bus in Toledo to go to Chicago.  You don’t normally see national headlines when a bus breaks down, but when the Russian Soyuz rocket taking an American and a Russian astronaut to the International Space Station (ISS) last week suddenly failed and the astronauts had to be rescued by an automatic emergency system, it made the New York Times and other media outlets around the world.  So we aren’t quite there yet, and it will be some time before we can even get anyone else up into orbit.

First, the failure.  On Thursday, Oct. 11, astronauts Aleksei Ovchinin and Nick Hague took off from the Baikonur Cosmodrome on a Russian Soyuz rocket.  The first stage consists of four detachable engines on four sides of the central second-stage rocket.  Two minutes into the launch, the four first-stage engines normally eject themselves and the second stage lights up to continue the flight into the orbital range of the ISS.  But according to the website www.extremetech.com, which quoted Russian head of human spaceflight Sergei Krikalev, experts suspect that something went wrong with one of the first-stage engine ejection systems.  One of them may have incompletely separated and collided with the second stage.  At any rate, enough damage was detected that an automatic abort system went into action, separating the capsule containing the two astronauts from the second stage and sending it on an earthbound trajectory that produced up to 6 Gs on the pair, twice what they experience during launch.  A parachute automatically deployed, and instead of spending six months in space aboard the ISS, Hague and Ovchinin ended their flight only a few minutes after launch in the desert-like steppes of Kazakhstan, where they were rescued by ground crews without further incident.

We can thank the Russian engineers who designed and installed the safety systems 35 years ago in what is now a pretty old but well-understood and reliable rocket system.  Russia has had its share of space-related accidents, dating all the way back to 1967 when a parachute on the Soyuz 1 flight failed to open, causing the death of astronaut Vladimir Komarov when his capsule crashed to the ground too fast.  Simplicity and reliability seem to be watchwords with the Russian space program, and while the two astronauts are no doubt disappointed that their flight was cancelled, so to speak, they are at least alive and well to talk about it.

The current residents of the ISS are in no immediate danger, as unmanned resupply rockets are still operational and they have an emergency escape capsule at hand should something require them to leave for Earth in a hurry.  But the failure of the only current means of human access to the ISS has put a big hold on the station’s plans, and on manned spaceflight in general.

In possibly a year or so, both Boeing and SpaceX hope to begin manned flights of their own, ushering in a new era of spacecraft built by private firms dedicated to the purpose, rather than the cumbersome government public-private partnerships that have up to now been responsible for all manned space travel.  But those companies’ rockets are not ready, and if recent history is any guide, they may not be ready for several years yet. 

Unlike the race to the moon, which was basically the Cold War between the old USSR and the U. S. carried out by peaceful means, private space firms are in competition only with other firms.  So it would be better to err on the side of prudence rather than rush into manned spaceflight only to have your latest creation blow up and kill people.  The rewards awaiting companies that make it first into space with humans are by no means certain, other than the glory of the thing.  But glory is hard to take to the bank. 

Tourism is one way to make money with space, but no one believes that will support the whole enterprise by itself.  And tourists have an obstinate prejudice against running a high risk of ending up as space junk or worse, so the safety record of space flight will have to be significantly improved before anything like mass tourism can come about. 

Regarding re-crewing the space station, currently the Soyuz is the only show in town, and if the accident investigation isn’t wrapped up satisfactorily by the end of 2018, we might see a situation in which the ISS crew comes home and the empty station is piloted remotely until Russian engineers are confident in launching more astronauts with the Soyuz.  Published schedules for the ISS through 2020 do not include any plans for using rockets other than Soyuz to transport astronauts to the station, so SpaceX’s manned flights will presumably be orbital demonstrations independent of the space station itself.

If manned space flight were completely routine, it wouldn’t attract the attention and excitement it does.  In my own field of engineering, some of the best students have ambitions to get involved in private space enterprises, and they go with my blessing.  But the period we are in now may be compared to where the aviation industry was around 1920.  Flying was still regarded as an exotic and dangerous sport, and it was not yet clear how anyone would make any serious money at it.

We can hope that the cause of the Soyuz failure can be identified and fixed soon enough that we don’t have to depopulate the ISS, and that transportation to the station can go back to its desirable no-headlines mode.  But we can also expect that the upcoming launches of private manned space rockets will get tons of publicity, even if they’re successful.  Because taking a rocket into space is still nowhere near as routine as taking a bus to Chicago.

Sources:  I referred to news reports on the Soyuz accident carried by the New York Times on Oct. 11, 2018 at https://www.nytimes.com/2018/10/11/science/soyuz-rocket.html, and also the sources https://www.space.com/42155-soyuz-abort-astronaut-nick-hague-first-interviews.html and https://www.extremetech.com/extreme/278883-russia-blames-soyuz-launch-failure-on-booster-collision.  I also referred to Wikipedia articles on a list of spaceflight-related accidents and incidents, the Baikonur Cosmodrome, and the International Space Station. 

Monday, October 15, 2018

Microcosm of Technological Culture: A Foundry Closes in San Jose


Every now and then, something happens that epitomizes an era.  When the first railroad to cross the North American continent was completed with a gold spike, that single event symbolized not only a success for the railroad industry, but opened a new chapter in American history.  So many meanings are packed into today’s subject that I won’t have space to explore them all, but I’ll try.

In 1919, a metals foundry called the Kearney Pattern Works began operations in what was then the small town of San Jose, California.  Back then, the state was largely agricultural, and the castings the foundry made were used by farm-product manufacturers, canneries, and the water and power utility industries.  During and after World War II, Kearney no doubt participated in the huge defense-plant buildup that transformed a sleepy agricultural economy into one of the nation’s economic powerhouses.  Corporations such as IBM and Hewlett-Packard (now HP) became clients.  In my own career as an RF and microwave engineer, I became familiar with some of HP’s products that used heavy aluminum castings for electrical stability, and it is entirely possible that those castings were poured in the shops of Kearney Pattern Works.

Metal casting is an ancient art mentioned in the Bible.  My university is one of the few in the U. S. that has an active foundry-education program, complete with a small working foundry, where once or twice a semester you can see the soul-stirring pouring of nearly white-hot glowing iron into molds.  For many people who grew up in the middle or latter part of the twentieth century, foundries symbolized the essence of industry.

We still have foundries, but increasingly, at least in my branch of engineering, the word means silicon foundry—a place where silicon chips are fabricated.  And even those are mostly offshore now.  After a century of operation, the Kearney Pattern Works is shutting down and the land will be sold to Google, which is planning a 245-acre complex employing 20,000 people in downtown San Jose.  I know nothing about the details of Google’s plans for their complex, but I’d be willing to bet any reasonable amount of money that if you walk in and observe what most of those people will be doing once the facility is up and running, they will be in clean, well-lit, air-conditioned offices sitting at computer monitors. 

Is that a bad thing?  The city of San Jose doesn’t think so.  Jim Wagner, 71, is the principal owner of the foundry and grandson of the founder Al Kearney.  He says the city has been pressuring him and other heavy-industry firms to leave the downtown area, but the expenses of moving would have been prohibitive.  So the alternative is simply to close the doors and sell the property to Google, which is probably one of the few private entities in the world that can afford to buy more than 200 acres of prime real estate at the southern end of Silicon Valley.

Foundry work is hot, dirty, and dangerous.  But foundry workers didn’t need a college education, or even much high school learning, at least at the lower levels of the firm.  During the Great Migration of blacks from the rural south to the industrial north, many found work in foundries and other muscle-intensive industries, which often paid well and allowed even uneducated people to afford decent housing and living standards for their families.  The hollowing out of these industries over the last four or five decades has contributed to the deterioration of many Northern cities and the inner-city areas of many other parts of the country as well.

If this were Cuba, the foundry would still be operating, because the government wouldn’t let it fail.  Socialism tends to freeze industries at a given moment and make them independent of actual economic conditions in the rest of the world.  But the bad result of this is that state-controlled industries tend to make stuff that nobody wants, and can’t make stuff that people do want.  The free-enterprise approach of letting innovation, success, and failure happen more or less as the market demands seems to keep companies on their toes to change with the technological and social environments they must operate in. 

It was Austrian economist Joseph Schumpeter (1883-1950) who came up with the phrase “creative destruction” to characterize the way technological innovation makes whole industries obsolete when new ones come along.  If everyone just accepts this process as a price of a free economy, progress continues.  But inevitably, companies that decide to do only one kind of thing end up taking the risk that some day, no one will want that kind of thing anymore.  And something like this has happened to Kearney.

When the timing of a firm’s demise coincides with the end of one’s career, as it has in Jim Wagner’s case, creative destruction isn’t so bad.  But the reporter who wrote the story didn’t mention any younger employees who will have to find work elsewhere.  Maybe there weren’t a lot of younger workers—even at its peak, Kearney employed only about 35 people, and many of those left years ago.

There are many contrasts between what Kearney has done at their location for the last century and what Google plans to do there for the next century, but another contrast is size.  Kearney was a small, privately-owned firm.  Google is—well, Google:  a nearly ubiquitous but oddly anonymous presence in the lives of people all around the world, whose doings are often opaque, secretive, and hugely influential.  In a foundry, what you saw was what you got:  the molds, the sand used in the molds, the hot metal, the smoke, the finished product.  What Google is doing at this moment, how they make their billions, and what goes on inside their shadowy corporate universe is known largely only to Google employees.

Modern industrial societies have accepted disruptive technological changes as the cost of enjoying the benefits of those same changes.  And while almost nobody will miss the smoke or mess or dirt of the Kearney foundry, it’s possible that some of its employees will wish it was still in business.  And maybe some of their children and grandchildren will get jobs at Google.  But they will probably have to spend a good part of their lives in school first, and even then, they might not make the grade.

Sources:  The article “Foundry’s departure ahead of downtown San Jose Google village project ends century of work” by George Avalos appeared on Oct. 12 on the website of the San Jose Mercury News at https://www.mercurynews.com/2018/10/12/foundrys-departure-ahead-of-downtown-san-jose-google-village-project-marks-end-of-era/.

Monday, October 08, 2018

Seeing May Not Be Believing: AI Deepfakes and Trust in Media


The 1994 movie “Forrest Gump” featured authentic-looking newsreel footage of the 1960s in which President Kennedy allegedly appeared with Tom Hanks’ fictional Gump character.  Trick photography is as old as cinema, and the only remarkable thing about such scenes was the technical care with which they were produced.  At the time, these effects were state-of-the-art and took the substantial resources of a major studio.

What only Hollywood could do back in the 1990s is soon coming to the average hacker everywhere, thanks to advanced artificial-intelligence (AI) deep-learning algorithms that have recently made it possible to create extremely realistic-looking audio and video clips that are, basically, lies.  An article in October’s Scientific American describes both the progress that AI experts have made in reducing the amount of labor and expertise needed to create fake recordings, and the implications that wider availability of such technology poses for the already eroded public trust in media.  Fakes made with advanced deep-learning AI (called “deepfakes”) can be so good that even people who personally know the subject in question—President Obama, in one example—couldn’t tell it was fake.

The critical issue posed in the article is “. . . what will happen if a deepfake with significant social or political implications goes viral?”  Such fakes could be especially harmful if released just before a major election.  It takes time and expertise to determine whether a video or audio record has been faked, and as technology progresses, that difficulty will only increase.  By the time a faked video that influences an election has been revealed as a fake, the election could be over.  We faced something similar to this in 2016, as it has been conclusively shown that Russian-based hackers spread disinformation of many kinds during the runup to the Presidential election.

Some voters will believe anything they see, especially if it fits in with their prejudices.  But the more firmly a voter is embedded in one camp or the other, the less likely they are to change their vote based on a single fake video.  The people who can actually change the outcome of an election are those who are undecided going into the final stretch of the campaign.  If they are taken in by a fake video, then real harm has been done to the process.

On the other hand, the public as a collective body is not always as stupid as experts naively think.  If a deepfake ever manages to be widely believed at a critical moment, and the fakery is later revealed publicly, the more thoughtful among us will keep in mind the possibility of fakery whenever we watch a video or listen to audio in the future.  This can be likened to an immune-system response.  The first invasion of a new pathogen into one’s body may do considerable damage, but a healthy immune system creates antibodies that fight off both the current infection and also any future attempts at invasion by the same pathogen.

If deepfakes begin to affect the public conversation significantly, we will all get used to the fact that any audio or video, no matter how genuine-looking, could be the concoction of some hacker’s imagination. 

Low-tech versions of this sort of thing happen all the time, but with lower stakes.  When I’m not writing this blog, I find time to do some lightning research, and a few years ago someone forwarded me a YouTube clip purporting to be a security-camera video of a guy who got struck by lightning, not once, but twice, and survived both times.  I watched the grainy monochrome recording of a man walking toward the camera on a sidewalk.  Suddenly there was a bright full-screen flash, and he’s down on the pavement, apparently dead.  Then he raises his head, shakes himself, and groggily rises to his feet, only to have a second flash knock him down again.  I heard from another lightning expert that this video was definitely fake.  Some people want so desperately to achieve viral fame that they will go to the trouble of setting up an elaborate fraud like this one just in the hope that their production will be kooky enough to get shared widely.  And in this case, they succeeded.

Speaking theologically for a change, some (including myself) trace the origin of lies back to the father of lies himself, the devil, and attribute lying to the only Christian doctrine for which there is abundant empirical evidence:  original sin.  No amount of high-tech defense is going to stop some people from lying, and if they can bend deep-learning AI to nefarious purposes such as creating inflammatory deepfake videos, they will.  The best defense from such scurrilous behavior is not necessarily just working harder to make fake-video-detection technology better, although that is a good thing.  It is to bear in mind that people will lie sometimes, and to use the time-honored rules of evidence to seek the truth in any situation.  And to bear in mind something that is often forgotten these days, that there is such a thing as objective truth. 

I think a more serious problem than deepfake videos is the fact that in pursuit of the online dollar, social media companies have trained millions of their customers to react to online information with their lizard brains, going for the thing that is most titillating and most conforming to one’s existing prejudices regardless of the likelihood that it’s true.  They have created an invisible mob eager and willing to be set off like a dry forest at the touch of a match.  And once the forest fire is going, it doesn’t matter if the match was real or fake. 

Sources:  Brooke Borel’s article “Clicks, Lies, and Videotape” appeared on pp. 38-43 of the October 2018 issue of Scientific American.

Monday, October 01, 2018

Implementing an Internet Bill of Rights


The U. S. Constitution’s first ten amendments make up what is called the Bill of Rights.  These guarantee freedom of religion, of the press, the right to a speedy and public trial, and other rights that were not explicitly mentioned in the Constitution itself.  As I was sitting in on a telecommunications class taught by industry expert Andres Carvallo last week, he speculated about something that I’m going to call an Internet bill of rights.  It doesn’t exist yet, and there are good technical and economic reasons to suspect it never will, but it’s a great idea and deserves airing.

Right now, your rights online are a hodge-podge of hundreds or maybe thousands of legal boilerplate agreements that you’ve checked that you agree to, probably by lying that you have read and understood them.  As I mentioned recently, this is a nasty little piece of hypocrisy that nevertheless is forced on anyone who deals with computers or the Internet.  But legally, your rights online are nothing more than the sum total of all the incomprehensible technical gobbledegook of those fine-print agreements, plus any applicable laws of the municipality, county, state, or country you happen to live in.  

And that’s just the stuff you are entitled to know about.  Internet companies sometimes do things with your data that they don’t admit to in their agreements, and the only way we find out about such things is through news reports of data breaches and underhanded dealings.  Such is the fragmented state of online rights today.

Carvallo’s vision is this:  you, the individual user, get to say exactly what your online privacy and other rights are.  If you don’t want anybody sending you ads for stuff you view online, you can say so. If you don’t want companies accumulating data about your eight-year-old daughter when she uses a toy that’s connected to the Internet, you can say so.  And if you’re on social media platforms, you don’t have to figure out each one’s arcane permissions structure individually.  You just state your preferences once for all in a centralized location, and everybody you deal with has to follow the rules you set up.  
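
A minimal sketch of what such a centralized rights profile might look like follows.  Every field name, method, and site here is invented for illustration; the point is only the shape of the idea: one profile, stated once, consulted by every service before it acts.

```python
# Hypothetical sketch of a single, user-owned rights profile.  All names
# are made up; no real standard or API works this way today.

from dataclasses import dataclass, field

@dataclass
class RightsProfile:
    allow_targeted_ads: bool = False        # global defaults, stated once
    allow_data_on_minors: bool = False
    allow_resale_of_data: bool = False
    per_site_overrides: dict = field(default_factory=dict)

    def permits(self, site: str, action: str) -> bool:
        """A service asks 'may I?' before collecting or targeting."""
        if action in self.per_site_overrides.get(site, {}):
            return self.per_site_overrides[site][action]
        return getattr(self, f"allow_{action}", False)

# The user grants one site an exception; everyone else gets the defaults.
profile = RightsProfile(per_site_overrides={"example.com": {"targeted_ads": True}})
print(profile.permits("example.com", "targeted_ads"))  # True: explicit override
print(profile.permits("other.com", "targeted_ads"))    # False: global default
```

The interesting design choice is that the default answer to any unrecognized request is "no," so a service that invents a new kind of data collection gets nothing until the user opts in.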

Now this notion may not be original to Carvallo.  But it’s the first time I’ve heard of such a concept, and it’s very appealing.  It would also be very hard to implement, but as he spoke in the context of teaching an engineering class, he encouraged the students to think of future technical possibilities, of which this was one.

You could push this even farther, to the extent of being able to say what ads you do and don’t see.  Google has been making a feeble effort in this direction for some time, in that when I close an ad that’s popped up in the middle of some article I’m reading, I will sometimes get a Google Ads option to say whether I never want to see that ad again and why.  But this is only a tiny step in the direction of the comprehensive vision of personal control that an Internet bill of rights would involve.

Of course, the reason most of the Internet is free is that there are ads.  And having the privilege of saying no to some or all ads would mean that for you, anyway, the companies would have to find some other way to make money.  And they’re not about to do that, not if the present system works for them.

Pay-for-viewing websites are a step in this direction.  In my limited experience, they seem mainly to be operated by newspapers and other old-school media who are striving to maintain some vestige of the old subscription model that worked for so many decades for physical newspapers and magazines.  So something like this can work, but only within the context of a single organization. Fixing things so no matter what you looked at, you’d never see ads anywhere on the Internet is presently almost unimaginable, although I suppose somebody could come up with some kind of shell or filter gizmo that might do that.

Which brings us to the technical question of how an Internet bill of rights could be implemented.  My answer is, I have no idea.  But anything that has to work with any website you go to would have to be built into the very structure of the Internet, and that means global standards and protocols.  When strictly technical problems come up, such as running out of IP addresses or something like that, the world’s engineers have figured out a pretty efficient and effective way of forming working groups, hashing out a technical solution, and agreeing on a standard that implements it.

But this only works for technical matters.  Things that threaten to affect an industry’s bottom line drastically are not suitable for the technical standards-setting mechanism. And an Internet bill of rights such as we’re discussing would be viewed as a threat by most online for-profit entities. 

In that case, we’d have to get into the political, social, and economic aspects of the problem.  And you’re not going to solve those kinds of matters with merely a working group of engineers.  Something like the United Nations or its International Telecommunications Union might have to be involved, but again, they primarily handle technical matters.  Because of the international nature of the Internet, an effective implementation of an Internet bill of rights would have to be agreed on worldwide.  And getting the world to agree on something even as simple as what time it is can be a hard thing to do, let alone a matter affecting the online activities of everybody on earth.

Well, we’ve traveled from one classroom in San Marcos, Texas to the whole Internet in one column.  I don’t think we’re any closer to having an Internet bill of rights than we were when we started. But it’s a nice idea, and I thank Andres Carvallo for bringing it up.  And if you’re optimistic, maybe you think that this won’t be the last time you read about it.