Monday, August 10, 2020

Sad Lessons from Beirut Ammonium Nitrate Explosion


On Tuesday, August 4, a warehouse in the crowded downtown port area of Beirut, Lebanon, caught fire.  Lebanon has been going through hard times lately:  COVID-19, hyperinflation, and general government dysfunction.  Ordinarily, a warehouse fire would not be cause for concern.  But this fire was different, because 2,750 tons of explosive ammonium nitrate was stored in the warehouse as a result of a combination of business misjudgment, bureaucratic incompetence, and negligence.


There are YouTube videos that show what happened next.  In one mobile-phone clip shot from a few miles away, an orange-colored smoke cloud towers above the downtown area.  Suddenly a black ball shot through with cracks of yellow balloons outward at unbelievable speed, followed by a larger whitish spray of water that covers many city blocks.  And then the shockwave hits and the phone is apparently knocked out of the witness's hand.


By the latest counting available at this writing (Sunday), at least 157 people were killed, over 5,000 were injured, and up to 300,000 people have been rendered homeless by the blast, which leveled almost all the buildings in the immediate vicinity and broke windows for a radius of many kilometers. 


How did that much ammonium nitrate end up in the middle of the capital city of Lebanon?  By a series of mishaps and oversights that, taken individually, were fairly minor.  But the end effect was disastrous.


A BBC article has straightened out the tangled tale.  In September of 2013, the Moldovan-flagged cargo ship MV Rhosus set sail from Batumi, a city on the eastern coast of the Black Sea, to deliver ammonium nitrate to an explosives factory in Mozambique.  After traveling through the Bosporus Strait and docking in Greece for about a month, the ship headed across the Mediterranean for the Suez Canal, which would take it to the eastern coast of Africa.


But somewhere in the Mediterranean, something went wrong.  One source said that the ship had "technical problems," but the Russian captain, a Mr. Prokoshev, said that the ship's owner had a cash shortfall, so in order to get enough money to pay for the Suez Canal passage fee, he ordered the captain to pick up a load of heavy machinery in Beirut.  In any event, the MV Rhosus ended up docking unexpectedly in Beirut in November of 2013.  According to the captain, the machinery was too heavy to load, and the owner didn't have enough money to pay the Beirut port fees and a fine.  This is why the Lebanese port authorities impounded the ship and its cargo.


To make matters worse, when presented with this situation, the owners of the ship and its cargo abandoned it to creditors and the port authorities, leaving them with 2,750 tons of dangerous ammonium nitrate and a crew that began running low on food and supplies.


Beirut has an official called the Judge of Urgent Matters, and when the crew petitioned this official for permission to leave the ship and go back to their various homes, the judge eventually relented.  In early 2014, port authorities transferred the bagged ammonium nitrate from the abandoned ship to a warehouse near a grain elevator in the port to await "auctioning and/or proper disposal." 


No actual crime had been committed, but by a series of mess-ups, the Port of Beirut became the unwilling owner of a warehouse full of ammonium nitrate.


Lower-level officials seemed to know the tremendous hazard that this stuff presented, and the BBC discovered messages from customs officials pleading with the Judge of Urgent Matters to do something about the confiscated explosives.  It appears they tried to get action at least six times in the next three years.  One can question the appropriateness of the judge's title, given that the stuff sat there all the way from 2014 until last Tuesday.  Reportedly, Public Works Minister Michel Najjar was discussing the stuff with the port manager as recently as late July, but again, nothing was done.


Up until the 1970s, Lebanon was one of the more competently run countries in the Middle East.  But things have deteriorated since then, and the explosion last week was the end product of a bureaucratic failure of historic proportions.


As the ship's captain commented to an interviewer, the best thing that could have happened would have been for the port to pay the ship's owner to take the ship and its cargo away as soon as possible.  It would have cost maybe $200,000, but the port authorities would have been spared having to deal with the hot potato of that much ammonium nitrate.


That didn't happen.  What the situation needed was a person with both the authority and the courage to get rid of the ammonium nitrate, which could have been sold or even donated as fertilizer, which is its other common use besides that of an explosive.  But that would have gone against what may be an all-too-common tendency in some government organizations:  dealing with the apparently urgent at the expense of the truly important but less obviously urgent.


One important function of governments is to act as a kind of social immune system, defending the body politic from potential and actual threats that are constantly attacking it.  Beirut's political immune system has been weakened by strife, the war in Syria, economic dislocations, and other factors to the point that a very basic "immune response" of getting what amounted to a time bomb out of the city center failed.  We hope that the citizens of Beirut and Lebanon will now stand a better chance of getting what they've been asking for for years:  more competence in government.  But governments everywhere, even the U. S., can learn from this tragedy that incompetence can have a heavy price.


Sources:  The BBC report on how the ammonium nitrate got to Beirut is at  I also referred to a report by the Indian Express at and Wikipedia articles on Beirut and Lebanon.

Monday, August 03, 2020

Accused Twitter Hackers Arrested

A couple of weeks ago, I blogged about a Twitter hack that made numerous celebrities appear to be offering $2,000 to anyone foolish enough to send them $1,000 in Bitcoin first.  I quoted a lawyer who said that authorities were pretty good about tracing Bitcoin transactions, despite that currency's reputation for enabling anonymous transactions, and that chances were good for an early solution to the case.

Turns out he was apparently right.  On Friday, July 31, the state attorney's office in Tampa, Florida arrested Graham Ivan Clark, a 17-year-old, and will prosecute him as an adult, as Florida laws allow in such cases.  Authorities in California, where Twitter is based, announced that two others, Mason Sheppard of England and Nima Fazeli of Orlando, Florida, are being charged in the case as well.  Fazeli is 22 and Sheppard is 19.

There are now a few more details about how the hack was done.  Somehow the alleged criminals obtained phone numbers for several Twitter employees.  In a technique called "spear phishing," they then tricked at least one employee into believing a call came from a legitimate helpdesk, and the caller persuaded the employee to give up credentials that allowed the attackers into Twitter's critical control systems.

One can imagine this playing out rapidly in a movie:  the scene switches back and forth between a teenager's cluttered bedroom in Tampa to the cool, sophisticated environment of a Silicon Valley megacorporation where the kid hoodwinks staffer after staffer, and at last he types something on his laptop and yells, "We're in!"  But Mr. Clark may not have gotten his ideas from a movie.  Just being a teenager may have been enough.

Brain researchers have found that the teenage brain is an odd mixture of sophistication and poorly-controlled impulses.  In a Time article by Alexandra Sifferlin, we read that the brains of teenagers are about as big as they're going to get, but not nearly as interconnected as those of people in their late 20s and older.  In particular, the prefrontal cortex, where planning and forethought occur, is not yet well connected to the limbic system, which deals with emotions and goes through a growth spurt beginning by age 12.  So all the pieces of the adult brain are there, but they aren't connected as well as they will be in an adult. 

Add to this the fact that certain kinds of mental activity turn out to be easy for clever teenagers and even children, while other kinds of mentally challenging work aren't.  For example, the world has known of many child prodigies in math (Blaise Pascal was writing proofs on the wall with a piece of coal by age 11) and music (Mozart).  But there haven't been any child-prodigy novelists or statesmen.  I'm not saying Clark is another Pascal, not by a long shot.  But programming and its illegal subset, criminal hacking, are activities that smart young people can easily master on their own without undergoing a long apprenticeship.

So couple that native ability with the poor impulse control of a teen brain, and you get situations like the one Graham Clark is in.  Yes, he did a clever thing that got him a lot of publicity and some money.  But now he's facing criminal charges (a laundry list of 30 felonies) that could put him in jail for much of his natural lifespan.

In this case, anyway, crime didn't pay.  But how about Twitter, and how apparently easy it was for the three hacketeers to spoof and spear-phish their way into one of the most prominent Silicon Valley social media companies?

This kind of thing is an IT security specialist's nightmare.  Despite all the encryption, coding precautions, and other software and hardware security you can throw around, any organization of any size relies on interactions among people who trust each other.  And unless all the people work in one room and know each other's names and behaviors (an increasingly rare situation in these COVID-19 times), there is always a chance that a properly-informed hacker could impersonate someone in the organization to steal credentials or other critical data. 

It's hard to think of a way to prevent this kind of thing absolutely, but I bet Twitter is reviewing its IT security rules right now to prevent another such attack.  This is a lesson that engineers, and really anybody involved in dealing with confidential information, can benefit from.  For some of us, it might not be anything more important than a credit-card number, though having your credit card hacked is no picnic (it's happened to me several times). 

For organizations such as Twitter that have extremely valuable credentials to protect, it's hard to say what policies would prevent hacks like the one masterminded by Clark.  Whatever they might be, they would have to partake of a kind of rigidity that goes against the Silicon Valley grain.

For example:  I once heard of a restaurant whose management held the safety and well-being of their customers so highly that if any of the people who laid out the silverware on the table was caught touching a fork anywhere above the handle, so as to get their fingers on something that would later go into a customer's mouth, that person was fired on the spot.  Excessive?  Probably.  But it bespoke a kind of integrity and seriousness that may be in short supply these days.  Nevertheless, such an attitude might go far, if turned into data-protection protocols, toward preventing the kind of thing that happened to Twitter.

Twitter recovered, after some embarrassing publicity.  The alleged culprits were caught, and now people can follow the Kardashians or whoever without fear of getting spurious tweets from them.  So maybe the price of an occasional hack is worth the laid-back atmosphere that allowed a seventeen-year-old to make a fool out of a famous social-media company.  To prevent hacks like this in the future, organizations like Twitter may have to implement rules that are inconvenient or even harsh.  But with great privileges come great responsibilities, and that may be a lesson a lot of us have yet to learn.   

Sources:   The Associated Press article by Kelvin Chan on the arrest of Clark and company was carried by several news outlets, including  I also referred to an article at  The detail about Pascal's proof in coal dust is from Wikipedia's "List of child prodigies" and the Time article on teenage brains can be found at

Monday, July 27, 2020

Will Online Learning End College As We Know It?

This topic comes close to home, as I am a college professor preparing to teach most of my classes online during the coming fall semester, as thousands of other college classes will be taught during the COVID-19 pandemic.  While I am not privy to my university's decisions about what tuition to charge, I tend to agree with a reporter at Forbes  named Stephen McBride that if students are asked to pay the usual tuition and fees for a vastly reduced online experience, many will feel shortchanged.  And with good reason.

As Denise Trauth, president of Texas State University, has emphasized in her messages to faculty and staff, the in-person part of education is an essential aspect of what it means to go to college.  And that is why she has tried to arrange to offer as many in-person classes as possible this fall, consistent with social distancing and limiting classroom occupancy to 50% of normal.  But such measures are temporary at best, and what McBride is saying is that the transition to online classes is going to expose conventional providers of higher education to extreme competition from startups that will deliver the same degree in the same way for much lower costs.  And that will cause the overpriced higher-education bubble to pop at last.

McBride points out that over the last few decades, the cost of a college education has vastly outstripped inflation.  One factor that he doesn't address directly is the widespread availability of federally backed student loans.  When the student-loan money spigot was turned on, colleges figured out a way to vacuum it up, and the result was increased costs.  The goal of making college available to more people was met, but at the price of tuition inflation. 

And middle-of-the-road state universities such as Texas State rely increasingly heavily on tuition and fees as the states steadily reduce the fraction of university income that comes from taxes.  Data from 2017 show that only 28% of Texas State's income was from state sources, while a little more than half (52%) came from tuition and fees. 

If anything comes along to upset the applecart of delicately balanced enrollment and expenditures, universities are ill-equipped to do significant budget cutting.  There are not many companies around that have a large fraction of their core employees (read:  tenured professors) with what amount to lifetime-guaranteed jobs until they feel like retiring.  So to make a significant budget cut at a large university, administrators have found that the most effective means is to create an atmosphere of impending doom with threats of closing entire departments.  I know this from experience at my previous institution, which shall remain nameless here but was in Massachusetts, and wasn't Harvard or MIT.  In such a poisonous environment, many of the good expensive people leave for better places, and the ones left are glad to keep their jobs and will take whatever cuts you give them.

Texas universities have already made budget cuts as requested recently by the state government, which is going through its own fiscal woes as tax revenue declines.  All these details are to say that despite precautions taken to deal with the threat of declining enrollment, the ability of state schools to deal with significant sudden drops is strictly limited, because funding formulas include enrollment, and funding would decline precipitously if fewer students attended.

The kind of education McBride envisions would be all-online and much, much cheaper than a conventional college experience.  His back-of-the-envelope estimate (which conveniently ignores things like labs and accreditation requirements) is that you could deliver college for as little as $3,000 a year per student.  I think that is low, but not very low.  If all you want to do is teach people online and you want to be a University of Walmart, I'm sure the thing can be done.  Tenure, research, football, buildings, traditions, commencements, and mascots would go out the window.  But people would still get educated, though mainly by computer, as it would still cost too much to hire qualified humans to interact in any meaningful way with the vast numbers of students each system would have to support to deliver costs that low.
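McBride's figure can be sanity-checked with a rough model of one's own.  The sketch below is mine, and every dollar figure and category in it is an illustrative assumption, not a number from McBride's article:

```python
# Back-of-the-envelope model of an online-only college's per-student
# annual cost.  All figures are hypothetical assumptions of mine,
# not numbers from McBride's article.

students = 10_000

annual_costs = {
    "instruction":   10_000_000,  # course development and instructor pay
    "platform_it":    5_000_000,  # servers, streaming, learning-management software
    "support_admin":  8_000_000,  # advising, grading support, administration
    "overhead":       7_000_000,  # everything else
}

total = sum(annual_costs.values())
per_student = total / students
print(f"Total: ${total:,}  Per student: ${per_student:,.0f}")
```

With these made-up numbers, $30 million spread over 10,000 students works out to $3,000 apiece.  The point is only that the figure is arithmetically plausible if nearly everything a traditional campus pays for is dropped.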

In our rude introduction to Zoom and all the other technologies that allowed us to keep universities running after last March, we instructors learned that online teaching is possible, and sometimes even effective.  By most reports, the students didn't like it compared to being in class.  But they recognized the crisis nature of the situation and cooperated as well as they could. 

If online competition similar to what McBride is talking about materializes in a significant way, he may be right that college will never come back, at least in the form we had up till last spring.  But throwing out the baby with the bathwater is not good for the baby, and I hope that young people trying to decide on their educational future will consider more than cost.

I think it's terrible that so many college students wind up with so much student-loan debt, even though that system has given me the job I have.  As a result of the student-loan windfall, colleges have built up a lot of administrative superstructure that is of questionable use, shall we say.  And maybe some competition from online-only outfits will produce some salutary trimming of what is truly not necessary.  But I think there is genuine value in the experience of living in a community dedicated at least nominally to learning, keg parties notwithstanding.  And it's only fair that people should pay something extra for that experience, but how much extra is an open question.

Every new technology brings with it the question, "Now that we can do it, should we do it?"  COVID-19 has given rise to the possibility of converting college massively to online-only at a much lower price.  I do not know how the public at large will respond if given that opportunity.  I suspect that McBride is right in that higher education is going to see more rapid changes in the next few years than we experienced perhaps in the last few decades.  But I hope at the end of it all, we end up with something that's better for students, better for universities, and better for the country as a whole.

Sources:   I thank my wife for bringing my attention to Stephen McBride's article "Why College Is Never Coming Back" at  I obtained my statistics on Texas State University's 2017 budget from a pie chart at

Monday, July 20, 2020

Twitter Hack Revelation: People Are Still Human

Last Wednesday, followers of the Twitter postings of famous people such as Joe Biden, Elon Musk, and Kim Kardashian all received some variant of the following message, which came from the Apple Twitter feed:  "We are giving back to our community. We support Bitcoin and we believe you should too!  All Bitcoin sent to our address below will be sent back to you doubled!"  This incident has brought to my mind a series of hoary epigrams, and the fact that enough people actually responded to this transparent scam to enrich the hackers by an estimated $110,000 reminds me of the first one:  There's a sucker born every minute. 

Twitter staff responded quickly, first by blocking the accounts on which the fraudulent tweets appeared, and then by briefly freezing the ability of all registered users to tweet anything.  (So for a few minutes on July 15, 2020, we had a Twitter-free world again, but not for long.)  Eventually, Twitter got things straightened out and life went back to what passes these days for normal.

How was this done?  Details are still scarce at this point, but apparently, it began when the hackers mounted what Twitter calls a "coordinated social engineering attack" on the organization.  That's techspeak for a trick like the following:  a bunch of emails or other messages purporting to be from someone in authority and asking for the victim to do something that they normally wouldn't do.  I partly fell for something like this myself one Saturday when I received an email allegedly from the dean of my college at the university, asking me to contact her.  I emailed back and the hacker then said she was in need of some gift cards for a meeting, and would I please go and buy some and email them to her?  Only then did I realize I was dealing with a scam.

So by some similar means, the hackers were able to access internal Twitter administrative tools.  In other words, they were in the driver's seat and they proceeded to push the pedal to the metal.  First, they located all the famous Twitter names they wished to hack.  (Republican politicians, strangely enough, were apparently immune from this attack, for reasons that remain to be determined—maybe the hackers didn't think anybody would believe Republicans would give away money.)  Then they changed the accounts' email addresses so the real account owners couldn't access their own accounts.  And then the hackers did something really stupid, which was to ask victims to send money to a Bitcoin account.

According to one authority at a law firm that specializes in cryptocurrency matters, U. S. law enforcement authorities can trace Bitcoin transactions pretty well, so the chances that the hackers will get away with their ill-gotten gains for good are not high.  On the other hand, Bitcoin and similar cryptocurrencies are well known for the shady and illegal transactions that people use them for, so it's hard to say how easily the hackers can be caught.  Overall, though, people involved with Bitcoin thought the net fallout from this incident would be favorable for cryptocurrency, because as one spokesman said to a Slate reporter, "Can you imagine if an advertiser wanted to ask all of these people to post about their company in one fell swoop? It would be an impossible purchase; you couldn’t even buy that much media."  Which brings to mind the second hoary epigram:  There's no such thing as bad publicity.  That is to say, just getting your name or product before the public is more important than exactly what causes the publicity in the first place, whether it reflects upon you favorably or otherwise.
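What makes tracing feasible in principle is that Bitcoin's ledger is public:  every transaction records which address paid and which address was paid.  Here is a toy sketch of following funds forward from a known scam address; all the addresses and transactions are invented for illustration:

```python
# Toy illustration of why a public ledger makes tracing possible.
# The addresses and transactions are invented; real chain analysis
# works on actual blockchain data and is far more sophisticated.

# Each entry: transaction id -> (paying address, receiving address)
ledger = {
    "tx1": ("victim_A",  "scam_addr"),
    "tx2": ("victim_B",  "scam_addr"),
    "tx3": ("scam_addr", "mixer_1"),
    "tx4": ("mixer_1",   "exchange_acct"),  # exchanges can link addresses to people
}

def trace_forward(start, ledger):
    """Follow funds forward from a starting address through the ledger."""
    seen, frontier = set(), {start}
    while frontier:
        addr = frontier.pop()
        seen.add(addr)
        for src, dst in ledger.values():
            if src == addr and dst not in seen:
                frontier.add(dst)
    return seen

print(trace_forward("scam_addr", ledger))
```

Real chain-analysis firms work with vastly more data, but the basic move is the same:  follow the public transaction graph until the money reaches a point, such as an exchange account, where an address can be tied to a person.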

The next epigram I will bring to your attention sums up what this incident tells us about human nature:  Plus ça change, plus c'est la même chose. ("The more things change, the more they stay the same.")  While the technology in this incident may be new, the aspects of human nature it exploited are as old as humanity itself. 

The hackers, who are simply criminals with some tech savvy, used their knowledge of human nature to get into the Twitter controls in the first place.  No matter how many seminars on computer security you make employees sit through, if your organization is large enough and if the hackers are clever enough, at least one person is likely to have a lapse of judgment when a hacker mimics an authority figure and asks the victim to do something that would otherwise be against their better judgment.  And one is sometimes all it takes.
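The arithmetic behind "one is sometimes all it takes" is unforgiving.  If each employee independently has even a small chance of being fooled, the odds that at least one of them is fooled grow quickly with headcount.  The probabilities below are illustrative, not measured phishing success rates:

```python
# Chance that at least one of n employees falls for a phishing attempt,
# if each one falls independently with probability p.
# The p values are illustrative assumptions, not real-world statistics.

def at_least_one_fooled(p, n):
    return 1 - (1 - p) ** n

for p in (0.01, 0.05):
    for n in (100, 1000):
        print(f"p = {p}, n = {n}: {at_least_one_fooled(p, n):.1%}")
```

Even with only a 1% chance per person, an attacker who reaches 100 employees already has better-than-even odds of one success.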

And on beyond that, the fact that enough Twitter users were gullible to the extent of sending thousands of dollars' worth of Bitcoin to Joe Biden or Apple or whoever, not stopping to wonder why the object of their admiration would first want them to send cash before returning twice the amount sent—well, it's people like that who keep con artists in business.  And of course, the millions of followers each of the famous people or organizations have, increased the chances that the hackers would find those few very special folks who both had the money and couldn't resist the thought of missing out.

A story in Physics Today, of all places, confirms that even people who are brilliant in one department can nevertheless be duped like anybody else.  Late in life, Sir Isaac Newton was a well-off government official (he ran England's mint) whom others sought out for advice about financial investments.  In the spring of 1720, a government-chartered outfit called the South Sea Company (sort of like the British East India Company that profited from colonial trade, but less successful) began issuing stock.  Joint stock companies were a new thing back then, and Newton first bought some South Sea shares, but then decided there was something fishy about the setup and sold his stock, although at a handsome profit.  The South Sea Company operators were basically running a Ponzi scheme, but as they were among the first to hit on the idea of paying the high returns promised to earlier investors with the money from sales to later investors, few people other than Newton smelled a rat. 

All through the summer of 1720, South Sea stock soared, and the psychological pressure of seeing other people apparently getting rich from their purchases proved too much for Newton, who turned around and put almost all his free cash into the stock again in June and July.  In August, the bubble began to burst, and by the end of September Newton had lost his proverbial shirt, along with everybody else who hadn't gotten out in time.  So even the most brilliant scientific mind of the eighteenth century was taken in by a stock scam.
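The mechanism that sank the South Sea investors, paying the returns promised to earlier investors out of later investors' deposits, can be shown collapsing in a toy simulation.  All the numbers below are made up for illustration:

```python
# Toy simulation of a scheme that pays the returns promised to earlier
# investors out of later investors' deposits.  All numbers are
# illustrative, not figures from the actual South Sea bubble.

promised_return = 0.5                    # 50% "return" promised per period
reserves = 0.0
liabilities = 0.0                        # principal taken in so far
inflows = [100, 150, 200, 100, 20, 5]    # new money per period; then it dries up

collapse_period = None
for period, new_money in enumerate(inflows, start=1):
    reserves += new_money
    payout = liabilities * promised_return   # returns owed on earlier principal
    liabilities += new_money
    if payout > reserves:
        collapse_period = period
        print(f"Period {period}: owes {payout:.0f} but has only {reserves:.0f} -- collapse")
        break
    reserves -= payout
    print(f"Period {period}: paid out {payout:.0f}, reserves now {reserves:.0f}")
```

The scheme looks healthy exactly as long as new money keeps arriving; the moment inflows slow, the promised payouts exceed what is on hand, and everyone still in loses.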

That may not make anybody who sent a thousand bucks to Kim Kardashian in hopes of financial gain feel much better.  But it confirms the fact that human nature hasn't changed that much in three hundred years, and whether the means are goose-quill pens or Twitter accounts, this final epigram is still true:  If it looks too good to be true, it probably is.

Sources:  I referred to articles on the Twitter hack and scam that appeared in Slate at and the website at  I also referred to the Wikipedia article on Twitter.  Andrew Odlyzko's article "Isaac Newton and the perils of the financial South Sea" appeared in the July 2020 issue of Physics Today, pp. 30-36.

Monday, July 13, 2020

The COVID-19 Crisis: Not Enough Technology to Go Around?

In a modern industrialized society like the U. S., we tend to take certain things for granted.  One of these things is that if someone needs emergency medical care, that care will always be available.  The COVID-19 pandemic is calling that assumption into question.

For a time in late spring, many hospitals in New York State were overwhelmed by COVID-19 patients who needed ventilators to keep from dying.  Even with ventilators, many died anyway, and it took weeks for the healthcare system there to recover to the extent that it could handle its normal emergency traffic along with the extraordinary COVID-19 patient load.  In Texas, where I'm writing this on July 12, we are currently being warned that if the COVID-19 infection rate continues to rise like it has in the last few weeks, we may be in a similar situation with maxed-out hospitals and the need to set up emergency wards in convention centers. 

Broadly speaking, a modern healthcare system is a technology in the same sense that a postal system is a technology.  It involves machinery, to be sure, but it also involves complex human relationships, states of training, and command structures that are just as essential as MRI machines and ventilators.  It takes a huge amount of resources in money, time, and investments of lifetimes of training and practice to develop the capabilities represented in a modern hospital.  So it's not surprising that when demands are placed on it that it wasn't designed for, you run into problems.  But the problems you run into aren't just failures of equipment.  It's things like what happened to Michael Hickson at St. David's South Austin Medical Center in Texas.

Until three years ago, Mr. Hickson was a reasonably healthy husband and father of five children.  In 2017, he had a heart attack while driving his wife to work, and suffered permanent brain damage from lack of oxygen before he received emergency treatment.  The injury left him a quadriplegic and in need of continuous medical care, which he was receiving at an Austin nursing and rehabilitation center when he tested positive for COVID-19 on May 15.  He ended up in St. David's ICU on June 3, and on June 5 the hospital informed Mrs. Hickson that he wasn't doing well. 

That day at the hospital, she had a conversation with an ICU doctor regarding her husband's care.  The situation was complicated by the fact that she had temporarily lost medical power of attorney to a court-appointed agency called Family Eldercare.  Someone recorded this conversation, and it makes for chilling listening and reading (the YouTube version is captioned).

When Mrs. Hickson asks why her husband isn't receiving a medication that can alleviate symptoms of COVID-19 and being considered for intubation, the doctor explains that her husband "doesn't meet certain criteria." 

The doctor explains that doing these things probably wouldn't change his quality of life and  wouldn't change the outcome.  When she asks him why the hospital decided these things, the doctor replies, " 'Cause as of right now, his quality of life . . . he doesn't have much of one." 

Mrs. Hickson asks who gets to make the decision whether another person's quality of life is not good.  The doctor says it's definitely not him, but the answer to the question about whether more treatment would improve his quality of life was no.

She asks, "Being able to live isn't improving the quality of life?"  He counters with the picture of Mr. Hickson being intubated with a bunch of lines and tubes and living that way for more than two weeks, but Mrs. Hickson gets him to admit that he knows of three people who went through that ordeal and survived.  She tells him that her 90-year-old uncle with cancer got COVID-19 and survived. 

His response? "Well, I'm going to go with the data, I don't go with stories, because stories don't help me, OK?"  Toward the end of the conversation, he says, ". . . we are going to do what we feel is best for him along with the state and this is what we decided." 

The next day, Mr. Hickson was moved to hospice care.  According to Mrs. Hickson, there they "withdrew food, fluid, and any type of medical treatment" from him, and he died on June 11, despite his wife's attempts to gain medical power of attorney back from the court-appointed agency.

There are at least two sides to this story, and in recounting this tragedy I am not saying that the Hicksons were completely in the right in all regards, nor that the hospital, its doctors, or Family Eldercare was completely in the wrong.  But clearly, the hospital was under pressure to allocate its limited resources to those who would benefit from them the most.  And it fell to the unhappy ICU doctor to explain to Mrs. Hickson that her quadriplegic, brain-damaged (and maybe I shouldn't mention this, but he was also Afro-American) husband was going to be left behind in their efforts to help others who had what the hospital and the state determined were higher qualities of life.

It isn't often that conflicting philosophies clash in a way that gets crystallized in a conversation, but that happened when the doctor said, "I'm going to go with the data, I don't go with stories."  In going with the data, he declared his loyalty to the science of medicine and its supposed objective viewpoint that reduces society to statistics and optimized outcomes.  In refusing to go with stories, he rejected the world of subjectivity, in which each of us is the main character in our own mysterious story that comes from we know not where and ends—well, indications are that the Hicksons are Christians, so their conviction is that their stories end in the Beatific Vision of the face of God.

But Mrs. Hickson would have been willing to look into the face of her beloved husband for a little longer.  Unfortunately, the ICU doctor and the state had other ideas.  Mr. Hickson might have died even if he had received the best that St. David's could offer.  But the lesson to engineers in this sad tale is that the best designs at the lowest price mean nothing if the human systems designed to use medical technology fail those that they are intended to help. 

Sources:  I read about this incident in articles at the website of National Review.  The recorded conversation between Mrs. Hickson, the ICU doctor, and a friend of Mrs. Hickson's can be heard online.

Monday, July 06, 2020

Is The Computer World Getting Beyond Repair?

Louis Rossman runs the Rossman Group, a team of about a dozen computer repair people in Manhattan.  Recently he was interviewed on the topic of the "right to repair," a concept that is of intense interest both to technicians employed in the repair industry and to anyone whose computer or phone goes on the blink and who is not prepared to chuck it and buy a new one. 

In the interview, Rossman describes the many ways that companies like Apple make it hard for anyone outside of Apple (and often inside too) to repair their ubiquitous devices.  For starters, a blanket of proprietary exclusion conceals useful information such as schematic diagrams, part numbers and identification, and diagnostic software.  Rossman has developed back-channel connections with engineers who work for the companies whose products he tries to fix, and sometimes can get the information he needs that way.  But he says that repair organizations shouldn't have to resort to legal gray areas such as under-the-table schematics to fix things.

Another obstacle arises when companies intentionally make their designs hard to fix.  Rossman said that some firms go to the trouble of pairing certain hardware chips with the particular computer they are installed in.  The machine would run just as well without this feature, which from the repair viewpoint is a bug.  All it does is prevent anyone from fixing the machine if that chip breaks: even installing a brand-new replacement chip won't help, because the new chip is not paired to the machine, and it won't run.  Designs like this are aimed specifically at making the unit harder to repair.
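The pairing scheme Rossman describes can be sketched in a few lines of Python.  The function name and the details here are my own illustrative assumptions, not any company's actual firmware; the point is only that the check adds nothing to normal operation and exists solely to reject otherwise-working replacement parts:

```python
# Hypothetical illustration of hardware "pairing": the logic board stores
# the serial number of the chip it shipped with, and the firmware refuses
# to run if an installed chip's ID doesn't match that stored value.

def firmware_boot_check(board_paired_serial: str, installed_chip_serial: str) -> bool:
    """Return True only if the installed chip is the factory-paired one."""
    return board_paired_serial == installed_chip_serial

# The original chip passes; a fully functional replacement still fails:
print(firmware_boot_check("CHIP-0042", "CHIP-0042"))  # True
print(firmware_boot_check("CHIP-0042", "CHIP-9999"))  # False
```

Nothing about the comparison makes the machine work better; it only guarantees that an unpaired (but otherwise good) part bricks the repair.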

Rossman devotes a small amount of time to promoting legislation and public awareness concerning the right to repair, but he isn't optimistic that huge changes will occur.  In an era when large corporations have skewed the intellectual-property field steeply in their favor, it's hard for small, independent operators like Rossman to gain a hearing.  And Rossman himself doesn't want too much government interference, as he is personally inclined toward libertarianism. 

If consumers were simply more aware of what was being done to make the products they buy harder to fix, they might make better choices, and repairability might become a cultural value like sustainability.  And the connection between the two is closer than you think.  For every computer (or phone, or car) that is fixed and stays in service, there is a new device that doesn't have to be made and sold, and the day is put off when that device ends up in a dump somewhere, as most electronics does despite increasing efforts at recycling. 

The simplistic view of this from the manufacturer's standpoint is that repair, especially by someone other than the company that made the product, simply cuts into sales of new units, and is to be discouraged as a pernicious legacy habit that consumers can eventually be trained to break.  But this is not the only way to go, as the automotive industry has abundantly demonstrated.  The average internal-combustion-engine car needs more maintenance and repair as it ages, and if you get attached to your old car (as I do), the inside of a car repair shop becomes very familiar. 

Perhaps if computers and phones had visible odometers like cars (and buried somewhere in the software, they probably do), it could become a point of pride to show how many hours your machine has racked up. 

But fixing things and keeping them for a longer time than absolutely necessary is to buck a trend that has been going mostly the other way since the beginning of the Industrial Revolution, when cheap machine-made products helped to bring millions up from poverty into the middle classes, both as employees of factories and as consumers of the products they made.  I'm not in favor of poverty, nor am I calling for a return to the bad old days when a watch was built mostly by hand, and a man expected it to last most of his life for the simple reason that most people couldn't afford to buy more than one watch in a lifetime. 

With Rossman, I'm in favor of reasonable accommodations for small repair firms and even individuals who are willing to crack open a computer or phone and see if they can fix it rather than simply getting a new one.  Such behavior is good stewardship of the built environment, which includes computers and phones.  Waste is not one of the seven deadly sins, but it's not a good thing either.  And when a fiendishly complex thing like a computer, with all its billions of coordinated parts that do amazing things, gets tossed in the trash simply because one of those parts falls down on its job, the world in general takes a hit that is not that noticeable, perhaps, but is significant nonetheless. 

Years ago I saw a TV episode of The Twilight Zone, I think it was, in which the writers wanted to show a man who lived a deplorably extravagant lifestyle.  So five minutes after the man picked up a new car, he pulled back into the dealership and said, "Hey, I want to trade this in on a new model."

When the salesman asked him why, the man said scornfully, "The ashtrays are full!"  (For those below a certain age, most cars used to have ashtrays, and most drivers used to smoke.)

Throwing away a complex piece of electronics simply because one part breaks that could be easily repaired (given enough information from the manufacturer) isn't much better than trading in a car because the windshield is dirty.  While there are plenty of other things to think about these days, perhaps people will use the extra time they have on their hands to take up a new occupation such as electronics repair.  And maybe the rest of us could look into how easy or hard our next piece of gear is to fix, and let repairability be a factor in our choices.  It would make Louis Rossman's job easier if companies recognized the value of repairability and started doing something about it.  And letting things go the way they're headed, toward a world where repairing stuff is unheard of, would be a waste in more ways than one. 

Sources:  The article "A Computer-Repair Expert Takes On Big Tech" appeared on the website of National Review on July 1, 2020.  I most recently blogged on the right to repair on Nov. 6, 2017.

Monday, June 29, 2020

A Burning Question: Trees Into Electricity?

Ancient humans probably first learned about fire by watching a forest burn.  One would think that in this era of nuclear and solar energy, the very old-fashioned alternative of burning wood for power is passé, but one would be wrong.  A recent article on the Wired website points out that biomass-fueled power plants are enjoying a comeback both in the U. S. and Europe, but for different reasons.  And the reasons are controversial.

Burning wood releases the greenhouse gas carbon dioxide, so other things being equal, generating electricity with solar or nuclear power is ecologically friendlier for that reason alone.  However, every tree on the planet has a natural life cycle, and before humans came along, the fate of many trees was to perish in a lightning-ignited forest fire.  We now know that such fires are a normal way for forests to renew themselves, and nature is not taken by surprise when a forest burns.  A few years later seedlings have sprouted into trees and the scars are largely healed. 

But in places like California, where residents of forested areas have promoted fire-prevention efforts that allow a buildup of dead trees and underbrush, the inevitable fires that nevertheless result can prove even more devastating than if people had just left nature to itself.  So a movement has arisen in that state to cut down dead trees and burn them in biomass plants, so much so that California leads the nation in the number of biomass-to-electricity facilities.

At first glance, this looks like a win-win situation.  The forests are better managed with those dead trees pruned away, the electric grid gets some much-needed power plants, and the local job markets benefit through the creation of labor-intensive logging and chipping activities.  But critics point out that burning any kind of biomass has a carbon footprint we could avoid, and the carbon sequestered in dead trees doesn't contribute to global warming.

I suppose somebody could get a grant to figure out exactly what mix of benign neglect, active harvesting of dead or even living trees, and biomass energy production would yield the most electricity for the smallest carbon footprint, but even if you could figure it out, other factors would intervene before you could optimize things. 

Such factors include politics, both domestic and abroad.  In the Southeast U. S., where attitudes toward forests are more commercial than esthetic, it turns out there is a booming business in planting and harvesting pine forests to make wood pellets for export to Europe.  In a controversial decision, the European Union decided to designate biomass-fueled power plants as renewable energy, and now European countries are importing lots of wood pellets from the U. S. to burn for electricity.

Back when we lived in New England a couple of decades ago, a friend of ours started a business selling wood-pellet stoves for home heating.  As long as the pellets were made locally, they were cheaper per heating unit than fuel oil, which was the only alternative for many homes.  But somehow I doubt that shipping wood pellets across the Atlantic is as cost-effective as shipping oil, or even coal.  But it's renewable, and that label is valued increasingly by an ecologically-conscious public willing to pay more for it.

If you consider the life cycle of a particular tree, there is a good but not certain chance that it will perish in a forest fire some day.  In prehistoric natural forests, this fate was probably more common than it is today in California's fire-protected forests, but as recent years have shown, it's impossible to prevent all forest fires.  And when an artificially-protected forest choked with dead trees and dry underbrush does catch fire, the resulting conflagration can be a lot worse than if we had just walked away from the place a few dozen years ago and let nature do its own burning at its own pace.  But people with million-dollar homes in the middle of a forest don't want to do that, and so you get the situation that California faces now, where many forests resemble powder kegs waiting for a match.

If you look at the situation from a sustainable-energy perspective, it seems to me that biomass energy fits the description better than many other so-called sustainable options.  Over the long term, here's what happens.  Trees use sunlight, water, carbon dioxide, and a few other things to make cellulose.  Either before or after the tree dies, people come along and chip up the tree and burn it for power, releasing the carbon dioxide back into the air.  But other trees will come along some day and grab that same carbon dioxide and repeat the cycle.  Sounds pretty sustainable to me.

One practical problem in the way of going completely biomass for our electricity is that biomass plants don't scale very well.  Just as an example, the largest biomass plant in Texas has a capacity of only 100 megawatts (MW).  The smallest natural-gas plant in Texas has a capacity of 176 MW, and the largest can put out 2051 MW, comparable to the two nuclear plants in Texas.  The fact of the matter is that it takes a whole lot of wood chips to make not that much energy, and so far, most biomass plants in the U. S. have been built not simply to produce power, but to achieve other ends as well:  reduction of dead-tree mass, employment, and so on. 
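The scaling gap in the capacity figures above can be made concrete with a bit of arithmetic (the megawatt numbers are the ones quoted in this post; the calculation is just an illustration):

```python
# Capacity figures quoted above, in megawatts (MW).
import math

biomass_largest_tx = 100   # largest biomass plant in Texas
gas_smallest_tx = 176      # smallest natural-gas plant in Texas
gas_largest_tx = 2051      # largest natural-gas plant in Texas

# Even the biggest Texas biomass plant is smaller than the smallest
# gas plant, and matching one large gas plant would take about twenty
# of them.
print(biomass_largest_tx < gas_smallest_tx)             # True
print(math.ceil(gas_largest_tx / biomass_largest_tx))   # 21
```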

So we probably shouldn't envision a future in which all our power comes from burning trees.  There just aren't enough trees to go around for that.  But in situations where labor, forestry policies, and politics coincide, biomass energy can both make sense and do some good.  It's not all good, but it's not all bad either, like most things in life.  And in burning wood for fuel, we are doing something that humanity has done since the dawn of time. 

Sources:  The Wired story by Jane Braxton Little entitled "The Debate Over Burning Dead Trees to Create Biomass Energy" appeared on June 27, 2020.  I also referred to the Wikipedia article "List of power stations in Texas" and some websites promoting the economy of wood pellets over oil.