Monday, April 25, 2016

The Pemex Vinyl Chloride Plant Explosion


Unless you work in the petrochemical industry, you have probably never been near the substance called vinyl chloride.  It is a chlorinated hydrocarbon that is made when one of the four hydrogen atoms in the compound called ethylene is replaced by a chlorine atom.  On the other hand, unless you live in a house whose plumbing is all more than forty or so years old, you probably use products made with vinyl chloride every day.  Polyvinyl chloride (PVC) pipes are used in the plumbing of nearly all new residential and business construction, and about 40 million metric tons (units of 1,000 kg) of PVC plastic were made in 2013.  But all PVC pipes were once the toxic, flammable compound called vinyl chloride, and that is what may have gotten loose at the Pemex Clorados 3 plant in the Gulf Coast city of Coatzacoalcos, Mexico last Wednesday, Apr. 20.  The resulting explosion and fire killed at least 28 people and injured over a hundred, with more still missing as of today.

Besides the immediate human tragedy, this accident raises important questions about the safety record of the state-owned petroleum company Pemex.

At this writing, little is known about the cause of the blast.  Coatzacoalcos is a town at the very southernmost tip of the Gulf of Mexico, in the Mexican state of Veracruz between central Mexico and the Yucatan Peninsula.  It is one of the main export terminals for Mexican oil and is a logical location for a vinyl-chloride plant, since manufacturing vinyl chloride requires large quantities of the petrochemical ethylene.  The Clorados 3 plant is a joint venture between Pemex and a PVC-pipe manufacturer called Mexichem. 

As with many petrochemicals, vinyl chloride is hazardous in several ways.  If released into the air, it evaporates into a dense vapor and can catch fire if a source of ignition such as an automobile engine is nearby.  Worse yet, the products of combustion are themselves hazardous:  hydrogen chloride (which when dissolved in water makes hydrochloric acid), and phosgene, which was used as a poison gas in World War I.  Besides the danger of explosion and fire, vinyl chloride is extremely toxic, and causes liver damage in animals at concentrations in air as low as 500 parts per million.  Higher concentrations cause acute illness and even death.  Because of these hazards, vinyl chloride is usually stored in double-walled containers under pressure, with leak monitors that detect low levels of leakage from the inner container before the outer wall is breached. 

It may take months before we can learn exactly what happened at Coatzacoalcos, but it is obvious that a large amount of something flammable got loose.  Some reports mention a strong odor of ammonia, which could be from refrigeration machinery used in process cooling operations in the plant.  Whether or not vinyl chloride itself was released, the high death toll says several things about this accident.

First, one can ask why there were so many people in a hazardous area.  The trend in modern petrochemical operations is to reduce staffing to the point that in emergencies or during strikes, an entire plant can be operated safely from one central control room.  Although this is speculation, it is possible that Pemex, being owned by the Mexican government, has adopted a different policy and relies more on hands-on operators in its plants as a way of increasing government-paid employment.  Whatever the reason, Pemex's safety record is not good.  News reports of this accident relate that in 2012, 26 people were killed in a natural-gas facility owned by Pemex and in 2013, an explosion in Pemex's Mexico City facilities killed 37 people. 

Next, what kind of safety culture does Pemex have?  To run a complex petrochemical plant without accidents is a monumental task, and many safety priorities are expensive, in the sense that they take resources which otherwise could be used to enlarge the firm's bottom line.  With the recent crash in oil prices, there are reports that Pemex is cutting expenses, and this latest accident raises the question of whether safety has been sacrificed to budget considerations.

Finally, there is Pemex's status as a state-owned enterprise.  I am not familiar with Mexican law, but it is quite possible that it is either statutorily or practically difficult to sue Pemex.  Also, Pemex may be self-insured rather than purchasing hazard insurance on the open market.  Both of these factors, if true, remove two of the greatest incentives private firms have to run their operations safely:  fear of lawsuits from injured parties and financial pressure from private insurers to run a safe and low-claims operation.  Without such incentives, Pemex management has only its own integrity to rely on for worker safety, and the demands for sustaining profits in the face of falling oil prices may have overwhelmed safety concerns. 

I hope that the investigative bodies in Mexico have all the competence and authority they need, not only to get to the bottom of this tragedy, but to publicize its causes and assign responsibility wherever it needs to be assigned.  Again, the status of Pemex as a state-owned firm may lead to conflicts of interest between state officials who want to make workplaces safer, and other officials who do not want to see a state-owned enterprise called to account.  The loser in such a conflict will be the workers who have the choice of being paid to put their lives on the line in a hazardous workplace, or to go somewhere else and earn even less than the $12,000 US annual salary that was the average in 2005 for Mexican chemical engineers. 

If reports surface in English as to the cause of this accident, it will be interesting to learn whether poor safety practices contributed to it.  In the meantime, my sympathy goes to all of those who lost loved ones or were injured.  And I hope this latest incident leads to a re-evaluation of the entire safety culture of Pemex, which looks like it could use a lot of work.

Sources:  I referred to a Reuters report on the accident at http://www.reuters.com/article/us-mexico-pemex-idUSKCN0XH2N2, an ABC News report at http://abcnews.go.com/International/wireStory/death-toll-28-mexico-petrochemical-plant-explosion-38614458, a Fox News item at http://www.foxnews.com/world/2016/04/21/blast-at-mexico-petrochemical-plant-kills-3-injures-more-than-100.html, and a CNN report at http://www.cnn.com/2016/04/23/americas/mexico-pemex-petrochemical-blast/.  I also referred to statistics on PVC production at http://www.plasticstoday.com/study-global-pvc-demand-grow-32-annually-through-2021/196257501821043, a salary survey for Mexico at http://www.worldsalaries.org/mexico.shtml, and the Wikipedia articles on vinyl chloride and ethylene. 

Monday, April 18, 2016

Should We Mind Minecraft?


If you've been around teenagers at all in the last few years, or if you are one yourself, you've probably run across someone who plays Minecraft, the computer game invented in Sweden in 2009.  I first encountered it a few years ago when we were visiting my 13-year-old nephew in Kansas.  I sat behind him in his father's car and watched over his shoulder as he constructed some kind of structure with what to me looked like amazing speed and skill.  He showed me some of the elaborate buildings he'd made with it and explained how he played the game with friends who could send wild animal-like creatures his way.  It all sounded rather weird, but at the same time I was fascinated by the basic premise of the game:  unless you build it, it isn't there.

In this week's New York Times Magazine, Clive Thompson, author of the book Smarter Than You Think:  How Technology Is Changing Our Minds for the Better, describes the origin, popularity, and multifaceted nature of Minecraft.  It appeals to both sexes and a wide range of ages, and in contrast to many slash-and-burn first-person-shooter-type games, parental attitudes toward it range mostly from the neutral to the favorable. 

Some people even say that playing Minecraft teaches kids useful skills, ranging from programming and logic design to three-dimensional visualization and the ability to deal with computer-aided design programs.  I suppose some education-psychology wonks will sooner or later divide a group of kids into Minecraft players and non-Minecraft players, and do a bunch of tests on them to see whether any of this is true.  Whatever the results are, I'm willing to go with the idea that Minecraft appeals to the creative part of one's personality, rather than the destructive part.  Although there can be plenty of destruction in Minecraft too—I've seen my nephew wipe out whole virtual city blocks and start over when things didn't go the way he wanted.

All the same, there's something about Minecraft that reminds me of an analogous trend from my own teenage years:  the golden age of electronics tinkering in the 1960s.  Transistors had just begun to replace the bulky, inefficient, and sometimes dangerous vacuum tubes, and for a few dollars spent at Radio Shack you could purchase hours of pleasant fiddling with amplifiers, oscillators, and logic circuits.  And I did. 

Thompson points out that one feature of Minecraft—"redstone"—acts basically like electric current, and you can build switches, relays, and highly complex logic circuits, all without ever having cracked a book on Boolean algebra.  He cites the case of Natalie, a fifth-grade girl, who he observes as she busily debugs her logic circuit when it fails to do exactly what she wants. 

This is good in some ways and not good in other ways, as I can explain from personal experience.

The brain is never as plastic later in life as it is in childhood and the teenage years.  Things you learn when you're 16 or younger are going to stay with you in a powerful way the rest of your life.  Depending on what you learn and how you learn it, this can be an unalloyed asset, a mixed asset and liability, or a liability.  With me, tinkering with electronics when I was young has turned out to have mixed results, although the balance sheet turned out to be positive.

Yes, I taught myself to do some pretty impressive things, like building a taped-program robot that could pick up things off the carpet of my room, and as a result my professional career in electronics had a firm technical foundation.  But I also learned to use old junk as my supply depot instead of earning money to buy new stuff, and I have been plagued ever since by what I recognize now is a bad habit of scrimping and making do with old junk around the lab, rather than asking for project money up front to do the job properly with state-of-the-art equipment.  And as a kind of lone wolf of the electronics world, I grew up with no connection between what I was interested in and what the rest of the world happened to want.  I have always had trouble making my own interests conform to what anybody else is interested in, which makes for problems when you try to get outside funding.

Yes, kids who devise what amount to combinational logic circuits when they are ten years old will probably be able to do that pretty well in college, too.  "So that's what it's called!" they may say in their first digital-logic class, and go on to become brilliant computer scientists and designers.  On the other hand, when you reinvent the wheel on your own, you're not likely to approach the subject in the way that subsequent experience has shown to be most efficient.  People who teach themselves coding often write what college-trained programmers call "spaghetti code"—so tangled and needlessly complicated that nobody else can figure out what's going on, not even the person who wrote it, at least after a while.  So while learning system administration and coding and logic design when you're ten can be cool, you can also acquire some deeply ingrained habits that may turn out to be liabilities in the long run.

Alexander Woollcott, a radio personality of the 1940s, told the story of how the comedian Harpo Marx, after he became famous for his self-taught harp performances on Broadway with the Marx Brothers' comedy act, decided one day he could finally afford harp lessons.  So Harpo found a professional harpist willing to teach him at ten dollars a half hour.  As Woollcott put it, ". . . the Maestro, having heard him play, swore there would be no way of his unlearning all the shockingly wrong things he knew about the harp."  Then the Maestro got Harpo to show him how Harpo did some things with the harp that the Maestro thought were not possible.  At the end of the half hour, Harpo paid his ten bucks, but as he'd been doing all the teaching, he never went back.

Not everybody who plays Minecraft is going to wind up as the Harpo of their techie generation.  And some of them may learn habits that will cause future teachers some distress, as the Maestro felt when he watched Harpo play.  But it's nice that at least one computer game out there invites you to get under the hood of the often opaque computer systems we live with so much and actually make something you can understand more or less completely, because you built it.  And if it breaks, you can try to fix it instead of just cussing the anonymous developers who should know better than to ship defective software. 

The inventor of Minecraft, Markus Persson, sold it for $2.5 billion to Microsoft in 2014 and washed his hands of the whole business after discovering that fielding thousands of inquiries from the millions of Minecraft fans wore him out.  But the thing he invented lives on, and I hope its career in the future will be as benign and instructional as it has been so far. 

Sources:  The article "The Minecraft Generation" by Clive Thompson appeared in the online New York Times Magazine on Apr. 17, 2016 at http://www.nytimes.com/2016/04/17/magazine/the-minecraft-generation.html.  The story (possibly apocryphal) of Harpo's harp lessons appeared in the March 1926 issue of Vanity Fair magazine, the text of which is accessible at http://www.vanityfair.com/news/1926/03/harpo-marx-theater-music.  And at last report, my nephew was running a YouTube channel with a microphone we bought him for Christmas, giving advice to other Minecraft players online.

Monday, April 11, 2016

Will Robots Ever Have Moral Authority?


Robots build cars, clean carpets, and answer phones, but would you trust one to decide how you should be treated in a rest home or a hospital?  That's one of the questions raised recently by a thoughtful article in the online business news journal Quartz.  Journalist Olivia Goldhill interviewed ethicists and computer scientists who are thinking about and working on plans to enable computers and robots to make moral decisions.  To some people, this smacks of robots taking over the world.  Before you get out the torches and pitchforks, however, let me summarize what the researchers are trying to do.

Some of the projects are nothing more than a type of expert system, a decision-making aid that has already found wide usefulness in professions such as medicine, engineering, and law.  For example, the subject of international law can be mind-numbingly complicated.  Researchers at the Georgia Institute of Technology are trying to develop machines that will ensure compliance with international law by programming in all the relevant codes (in the law sense) so that the coding (in the computer-science sense) will lead to decisions or outcomes that automatically comply with the pertinent statutes.  This amounts to a sort of robotic legal assistant with flawless recall, but one that doesn't make final decisions on its own.  That would be left to a human lawyer, presumably.

Things are a little different with a project that philosopher Susan Anderson and her computer-scientist husband Michael Anderson are working on:  a program that advises healthcare workers caring for elderly patients.  Instead of programming in explicit moral rules, they teach the machine by example.  The researchers take a few problem cases and let the machine know what they would do, and after that the machine can deal with similar problems.  So far it's all a hypothetical academic exercise, but in Japan, where one out of every five residents is over 65, robotic eldercare is a booming business.  It's just a matter of time until someone installs a moral-decision program like the one the Andersons are developing in a robot that may be left on its own with an old geezer, such as the writer of this blog, for example.

What the Quartz article didn't address directly is the question of moral authority.  And here is where we can find some matters for genuine concern.

Many of the researchers working on aspects of robot morality evinced frustration that human morality is not, and may never be, reducible to the kind of algorithms that computers can execute.  Everybody who has thought about the question realizes that morality isn't as simple and straightforward as playing tick-tack-toe.  Even the most respected human moral reasoners will often disagree about the best decision in a given ethical situation.  But this isn't the fundamental problem in implementing moral reasoning in robots.

Even if we could come up with robots who could write brilliant Supreme Court decisions, there would be a basic problem with putting black robes on a robot and seating it on the bench.  As most people will still agree, there is a fundamental difference in kind between humans and robots.  To avoid getting into deep philosophical waters at this point, I will simply say that it's a question of authority.  Authority, in the sense I'm using it, can only vest in human beings.  So while robots and computers might be excellent moral advisers to humans, by the nature of the case it must be humans who will always have moral authority and who make moral decisions. 

If someone installs a moral-reasoning robot in a rest home and lets it loose with the patients, you might claim that the robot has authority in the situation.  But if you start thinking like a civil trial lawyer and ask who is ultimately responsible for the actions of the robot, you will realize that if anything goes seriously wrong, the cops aren't going to haul the robot off to jail.  No, they will come after the robot's operators and owners and programmers—the human beings, in other words, who installed the robot as their tool, but who are still morally responsible for its actions. 

People can try to abdicate moral responsibility to machines, but that doesn't make them any less responsible.  For example, take the practice of using computerized credit-rating systems in making consumer loans.  My father was a loan officer at a bank in the 1960s before such credit-rating systems came into widespread use.  He used references, such bank records as he had access to, and his own gut feelings about a potential customer to decide whether to make a loan.  Today, most loan officers have to take a customer's computer-generated numerical credit rating into account, and the job of making a loan is sometimes basically a complicated algorithm that could almost be executed by a computer. 

But automation did not stop the banking industry from running over a cliff during the housing crash of 2007.  Nobody blamed computers alone for that debacle—it was the people who believed in their computer forecasts and complex computerized financial instruments who led the charge, and who bear the responsibility.  The point is that computers and their outputs are only tools.  Turning one's entire decision-making process over to a machine does not mean that the machine has moral authority.  It means that you and the machine's makers now share whatever moral authority remains in the situation, which may not be much.

I say not much may remain of moral authority, because moral authority can be destroyed.  When Adolf Hitler came to power, he supplanted the established German judicial system of courts with special "political courts" that were empowered to countermand verdicts of the regular judges.  While the political courts had power up to and including issuing death sentences, history has shown that they had little or no moral authority, because they were corrupt accessories to Hitler's debauched regime.

As Anglican priest Victor Austin shows in his book Up With Authority, authority inheres only in persons.  While we may speak colloquially about the authority of the law or the authority of a book, it is a live lawyer or expert who actually makes moral decisions where moral authority is called for.  Patrick Lin, one of the ethics authorities cited in the Quartz article, realizes this and says that robot ethics is really just an exercise in looking at our own ethical attitudes in the mirror of robotics, so to speak.  And in saying this, he shows that the dream of relieving ourselves of ethical responsibility by handing over difficult ethical decisions to robots is just that—a dream. 

Sources:  The Quartz article "Can We Trust Robots To Make Moral Decisions?" by Olivia Goldhill appeared on Apr. 3, 2016 at http://qz.com/653575/can-we-trust-robots-to-make-moral-decisions/.  (I thank my wife for pointing it out to me.)  The statistic about the number of aged people in Japan is from http://www.techinsider.io/japan-developing-carebots-for-elderly-care-2015-11, and my information about Hitler's political courts appears on the website of the Holocaust Memorial Museum at https://www.ushmm.org/wlc/en/article.php?ModuleId=10005467.  Victor Lee Austin's Up With Authority was published in 2010 by T&T Clark International.

Monday, April 04, 2016

Learning from the Kolkata Overpass Collapse


On Thursday Mar. 31, around noon, the busy Rabindra Sarani-KK Tagore Street crossing in the city of Kolkata, India (population 4.5 million) was crowded with shoppers and people having lunch in open-air eateries.  Crowds that a Westerner would consider to be a mob scene are routine in the Indian subcontinent, and the density of street-level shops makes many thoroughfares almost impassable by automobile.  To alleviate this congestion, in 2008 the Hyderabad-based construction conglomerate IVRCL won a bid to construct an overpass that would carry vehicular traffic above the existing street.  Construction began in 2009 and was due for completion in 2012.  But the firm ran into financial and land-acquisition difficulties, with consequent project delays, and so last week one of the last parts of the projected 2+ kilometer-long overpass was still under construction above the street.

By Wednesday, Mar. 30, a long straight section of the overpass was complete, and concrete was poured that night for a section next to a turn at the crossing, where steel girders already were suspended above the road.  At about 12:25 PM Thursday, some 300 feet (100 meters) of the overpass collapsed onto the street below.  As of Apr. 2, the death toll stood at 27, but more were missing and over 100 people were injured. 

While the cause of the collapse is under investigation, the IVRCL firm has been charged with culpable homicide and three members of the firm have been arrested.  This is after one firm representative termed the collapse "an act of God."

The construction phase of any large civil-engineering project is fraught with hazards that only good planning and expert supervision at all times can avoid.  As a civil-engineering professor interviewed about the tragedy pointed out, right after a poured-concrete structure is set in place, the weight of the newly poured material must be supported by temporary scaffolding until the concrete sets.  In contrast to the finished product, which office-based engineers can design at their leisure to withstand known stresses, temporary scaffolding is erected onsite in an ad-hoc way, and may have hidden defects that would require more engineering knowledge to avoid than the onsite construction workers and supervisors have.  It was apparently one such defect that led to the disaster in Kolkata last week.

From videos shot during the collapse, it appears that few if any pedestrian or vehicle barriers were in place to keep people away from the construction site.  Admittedly, this would have been difficult, like temporarily shutting down Times Square in New York City for construction.  And businesses on the street undoubtedly would have complained if large sections of the surface street had been blocked off, impairing access to some shops.  But events have proved that the tradeoff would have been worth it, if excluding traffic from under the most hazardous parts of the overpass during construction would have saved lives.

While some commenters on Indian news sites complained that such things are never allowed to happen in the so-called First World, only a year ago I reported in this space about a similar but smaller-scale accident involving overpass construction, right here in Texas.  While a prefabricated-concrete-beam overpass was being built over the busy I-35 freeway near Salado, Texas, a truck carrying an overheight load struck one of the beams before it had been firmly fixed in place.  It shifted and knocked down several other beams, one of which killed the driver of a pickup truck.  Again, this accident could have been prevented by diverting traffic from underneath the overpass, but the result would have been permanent miles-long backups on I-35 that might have provoked angry citizens to mount a protest march at the Texas Department of Transportation. 

Any complex engineering project is a series of compromises with safety, expenses, schedules, personnel, and other resources all in the mix.  In the West, a relative abundance of resources has led engineering organizations to err on the side of more money traded for more safety.  In India, as the country's record of fatal building and construction collapses attests, getting the project done cheaply sometimes takes priority over getting it done safely.  India is a democracy, and it may be that the current level of construction safety reflects an increased urgency to meet the nation's civil-engineering needs faster and with fewer resources than Western-style engineering would allow.  It is bad enough when a privately-owned building collapses.  But a public-works project such as an overpass inherently affects more people, and carries more potential for harm.  This is why most public-works project specifications require licensed professional engineers to supervise the design phase.  But the best designers in the world will be unable to prevent onsite accidents if the people who actually do the construction are not capable of understanding the hazards and engineering challenges involved.

The degree to which the arrested IVRCL employees are responsible is now going to be determined by the legal process, which may take months or years.  Regardless of the fate of the engineers and managers involved in this accident, preventing future tragedies like this one will require a sea change in the entire construction industry in India. 

I have mentioned before a simple safety code that was once emblazoned on bronze plaques in Bell System telephone exchanges throughout the U. S.:  "No job is so important and no service is so urgent—that we cannot take time to perform our work safely."  That was back when the Bell System was a monolithic nation-like organization, and it could afford hundreds of bronze safety plaques.  But everyone working in a business that creates potential hazards for its own employees, and especially for innocent bystanders, can afford to make the Bell System safety creed their own.  And something like this could go a long way toward making Indian construction sites and buildings safer places to be.

"A Bridge Too Close," about the I-35 accident appeared on Mar. 29, 2015.