Monday, November 29, 2021

Judging the Judgments of AI

 

If New York City mayor Bill de Blasio allows a new bill passed by the city council to go into effect, employers who use artificial-intelligence (AI) systems to evaluate potential hires will be obliged to conduct a yearly audit of their systems to show they are not discriminating with regard to race or gender.  Human resource departments have turned to AI as a quick and apparently effective way to sift through the mountains of applications that Internet-based job searches often generate.  AI isn't limited to hiring, though, as increasing numbers of organizations are using it for job evaluations and other personnel-related functions. 

 

The bill would also give candidates the option of an alternative evaluation process.  So if you don't want a computer evaluating you, you can ask for another opinion, although it isn't clear what form this alternative might take.  Nor is it clear what would keep every job applicant from requesting the alternative at the outset, but maybe you have to be rejected by the AI system first to request it.

 

In any case, New York City's proposed bill is one of the first pieces of legislation designed to address an increasingly prominent issue:  the question of unfair discrimination by AI systems. 

 

Anyone who has been paying attention to the progress of AI technology has heard some horror stories about things as seemingly basic as facial recognition.  An article in the December issue of Scientific American mentions that MIT's Media Lab found facial-recognition technologies to be markedly less accurate on non-white faces than on white ones. 

 

Those who defend AI can cite the old saying among software engineers:  "garbage in, garbage out."  The performance of an AI system is only as good as the set of training data that it uses to "learn" how to do its job.  If the software's designers select a training database that is short on non-white faces, for example, or women, or other groups that have historically been discriminated against unfairly, then its performance will probably be inferior when it deals with people from those groups in reality.  So one answer to discriminatory outcomes from AI is to improve the training data pools with special attention being paid to minority groups.
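One simple precaution along these lines is just to measure the demographic makeup of the training set before training begins.  The sketch below is only an illustration of the idea, not any company's actual practice; the 15% minimum share is an arbitrary threshold chosen for the example, and the group labels are hypothetical.

```python
from collections import Counter

def audit_training_mix(labels, min_share=0.15):
    """Report each demographic group's share of a training set and
    flag any group that falls below a chosen minimum share.
    The 15% default is an arbitrary illustration, not a standard."""
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, n in counts.items():
        share = n / total
        report[group] = (share, share < min_share)  # (share, underrepresented?)
    return report

# A deliberately skewed toy dataset: group "C" is badly underrepresented.
labels = ["A"] * 80 + ["B"] * 15 + ["C"] * 5
for group, (share, flagged) in sorted(audit_training_mix(labels).items()):
    print(group, f"{share:.0%}", "UNDERREPRESENTED" if flagged else "ok")
```

Of course, a balanced head count is only a first step; the training labels themselves can carry historical bias even when every group is well represented.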

 

In implementing the proposed New York City legislation, someone is going to have to set standards for the non-discrimination audits.  Out of a pool of 100 women and 100 men who are otherwise equally qualified, on average, what will the AI system have to do in order to be judged non-discriminatory?  Picking 10 men and no women would be ruled out of bounds, I'm pretty sure.  But what about four women and six men?  Or six women and four men?  At what point will it be viewed as discriminating against men?  Or do the people enforcing the law have ideological biases that make them consider discriminating against men to be impossible?  So far, none of these questions have been answered.
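One existing benchmark the auditors might borrow is the "four-fifths rule" from U. S. employment guidelines, under which a selection rate for one group that is less than 80% of the rate for the most-selected group is taken as evidence of adverse impact.  Whether New York's enforcers would adopt that rule is anyone's guess; the sketch below simply illustrates how such a test would score the hypothetical numbers above.

```python
def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Return the ratio of the lower selection rate to the higher one.
    Under the four-fifths rule, a ratio below 0.8 suggests adverse impact."""
    rate_a = selected_a / total_a
    rate_b = selected_b / total_b
    lo, hi = sorted([rate_a, rate_b])
    return lo / hi

# Out of 100 equally qualified women and 100 equally qualified men:
print(adverse_impact_ratio(4, 100, 6, 100))  # 4 women vs. 6 men: ratio ~0.67, below 0.8
print(adverse_impact_ratio(5, 100, 5, 100))  # equal rates: ratio 1.0, no adverse impact
```

Note that the test is symmetric: six women and four men would fail it just as surely as four women and six men, so a rule like this would at least answer the question of whether discrimination against men counts.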

 

Perhaps the best feature of the proposed law is not the annual-audit provision, but the conferral of the right to request an alternative evaluation process.  There is a trend in business these days to weed out any function or operation that up to now has been done by people, and replace the people with software.  There are huge sectors of business operations where this transition is well-nigh complete. 

 

Credit ratings, for example, are accepted by nearly everyone, lenders and borrowers alike, and are generated almost entirely by algorithms.  The difference between this process and AI systems is that, in principle at least, one can ask to see the equations that make up one's credit rating, although I suspect hardly anyone does.  The point is that if you ask how your credit rating was arrived at, someone should be able to tell you how it was done.

 

But AI is a different breed of cat.  For the newest and most effective kinds (so-called "deep neural networks") even the software developers can't tell you how the system arrives at a given decision.  If it's opaque to its developers, the rest of us can give up any hope of understanding how it works. 

 

Being considered for a job isn't the same as being tried for a crime, but there are useful parallels.  In both cases, one's past is being judged in a way that will affect one's future.  One of the most beneficial traditions of English common law is the custom of a trial by a jury of one's peers.  Although trial by jury itself has fallen on hard times because the legal system has gone in for the same efficiency measures that the business world goes for (some judges are even using AI to help them decide sentence terms), the principle that a human being should ultimately be judged not by a machine, but by other human beings, is one that we abandon at our peril.

 

Theologians recognize that many heresies are not so much the stating of something that is false, as they are the overemphasis of one true idea at the expense of the other true ideas.  If we make efficiency a goal rather than simply a means to more important goals, we are going to run roughshod over other more important principles and practices that have given rise to modern Western civilization—the right to be judged by one's peers, for example, instead of by an opaque and all-powerful algorithm. 

 

New York's city council is right to recognize that AI personnel evaluation can be unfair.  Whether they have found the best way to deal with the problem is an open question.  But at least they acknowledge that all is not well with an AI-dominated future, and that something must be done before we get so used to it that it's too late to recover what we've lost.

 

Sources:  The AP news story by Matt O'Brien entitled "NYC aims to be first to rein in AI hiring tools" appeared on Nov. 19 at https://apnews.com/article/technology-business-race-and-ethnicity-racial-injustice-artificial-intelligence-2fe8d3ef7008d299d9d810f0c0f7905d.  The Scientific American article "Spying On Your Emotions" by John McQuaid (pp. 40-47) was in the December 2021 issue.

Monday, November 22, 2021

Does Money Trump Ethics in Architecture?

 

Not too long ago, I was talking with an alumnus of the University of California at Santa Barbara, who mentioned plans for a strange new dorm on campus.  When I was attending Caltech in the 1970s, I had the privilege of visiting UCSB, and it has classic California vistas:  beautiful beaches, palm trees, hills in the distance.  You'd think that any dormitory there would take advantage of the view and at least allocate one window per dorm room. 

 

Well, Charlie Munger doesn't think so.  Mr. Munger, an investing associate of Warren Buffett, is a billionaire who has donated $200 million to UCSB to build a new dormitory for 4500 students, which would make it one of the largest campus dormitories in the U. S.  But he is saying that the dorm has to be built to his specifications, which are eccentric, to say the least.

 

For one thing, most of the rooms where people actually live will not have any windows.  Mr. Munger says they can have TV screens like staterooms in Disney cruises have—a kind of artificial view that can show anyplace in the world, presumably.  The building will have windows, but they open into common areas, not individual dorm rooms.  And so far, the plan is for the building to have only two entrances. 

 

Architect Dennis McFadden has served on UCSB's building design review committee for about fifteen years.  But he resigned when the university overruled his opposition and accepted Munger's plans.  In his letter of resignation, he said that “[t]he basic concept of Munger Hall as a place for students to live is unsupportable from my perspective as an architect, a parent and a human being."  That about covers the field.

 

I'm assuming that the university has not yet run the plans by the Santa Barbara building-code enforcement authorities.   When they do, I think the fire marshal will have a few things to say about plans for 4500 students to exit in a few minutes through only two doors.  Maybe Mr. Munger won't mind the university hanging fire escape stairs on the exterior, like you used to see in old noir movies of nighttime chase scenes in Manhattan.  But that's an issue for another day.

 

The more fundamental question here is, how loudly does money talk when it comes to building a fairly permanent thing like a dorm that promises to create problems?  If we judge by UCSB's actions so far, you'd have to say it talks pretty loud. 

 

It's one thing if an eccentric millionaire leaves her dog a $12 million trust fund, as the late Leona Helmsley did for her Maltese, appropriately named Trouble.  But buildings in general, and dormitories in particular, affect the lives of thousands of people over their useful lifetime. And while I doubt that anyone staying in Munger Hall will actually die from it (unless they really do build it with only two exits and there's a fire), they may not grow weepy at their thirtieth class reunion remembering how wonderful it was to wake up every morning to a TV screen instead of an ocean view.

 

In The Aesthetics of Architecture, the late philosopher Roger Scruton tackles the old saw de gustibus non est disputandum (Latin for "in matters of taste, there can be no disputes").  He argues learnedly through some three hundred pages that just as there are right and wrong ways to behave in society and to execute works of fine art, there are right and wrong ways to design buildings—not simply from a safety point of view (which goes without saying) but from an aesthetic point of view as well. 

 

If UCSB builds Munger's dorm exactly the way he wants it built, it won't fall down.  And with appropriate fire codes observed, it won't be dangerous to live in.  But judging from the exterior artist's renderings posted online, the building can't avoid looking like what it is:  a big box to house as many students as possible in as small a volume as possible. 

 

We have something similar on my campus at Texas State University.  Called Tower Hall, it is a monolithic oblong block dotted with tiny four-foot-square windows, and from the outside (I've never been inside, although my nephew survived a semester in it) looks like a nice, well-designed prison.  I'm not sure what combination of poor judgment, penny-pinching, and administrative absence of mind led to the construction of Tower Hall, but fortunately the mistake has never been repeated, and our current President Trauth, since her arrival twenty years ago, has imposed a pleasing Romanesque style on every building built during her watch. 

 

Engineers, and possibly even some architects, spend little time or effort considering the long-term effect of the appearance and aesthetics of their works on the spirits of those who use them.  Americans are not accustomed to thinking in terms of decades, and perhaps the enticing bait of a $200 million donation beguiled the UCSB authorities to throw good judgment to the winds and agree to Mr. Munger's plans for a building that will ultimately cost more than five times that amount.  Such issues have imponderable effects that may not seem important, but accumulate over time. 

 

If we're looking for bad examples, I will hold up for inspection the school I served for seventeen years:  the University of Massachusetts Amherst.  While it is a highly regarded research institution that would be better known if it were not in the shadows of Harvard and MIT, the Amherst campus is a collection of nearly every major architectural style used for public buildings in the U. S. from 1880 to 2020, from a faux-Gothic chapel to a 1960s-bunker-style administration building to a 26-story library that is wildly out of place in a small New England town.  It shows what happens when every administration wants to make its presence known by doing something different than the last one did with whatever money comes to hand.  Although there were other factors involved, I blame the architecture of that campus for at least some of the most depressing days I have ever experienced in my life.

 

I recommend that the building planners at UCSB take an extended field trip—in February, let's say—to UMass Amherst to see what happens when other considerations prevail over aesthetic ones in architecture.  If that doesn't change their minds, nothing will.

 

Sources:  The website ArchDaily carried Kaley Overstreet's article "Health, Safety, and Welfare:  What Happens when Design Trumps Ethics?" on Nov. 14 at https://www.archdaily.com/971880/health-safety-and-welfare-what-happens-when-design-trumps-ethics.  I also referred to an NBC news article at https://www.nbcnews.com/news/uc-santa-barbara-mega-dorm-munger-hall-rcna4401, and Wikipedia articles on Charlie Munger and Leona Helmsley.  Roger Scruton's The Aesthetics of Architecture was published in 1979 by Princeton University Press.  I thank Michael Cook of Mercatornet.com for drawing my attention to the ArchDaily article.

 

Sunday, November 14, 2021

Aristotle Questions Innovation

 

Innovation—the introduction of new technologies, systems, and ways of doing things—is such a fundamental part of modern engineering, and life in general, that we rarely stop to question it.  One person who has, however, is Gladden Pappin, a professor of political science at the University of Dallas.  In the December issue of First Things, he describes how innovation has become an automatic part of modern Western civilization, and why this is not necessarily a good thing.

 

Pappin is no Luddite (the nineteenth-century Luddites were allegedly followers of one Ned Ludd, a weaver who protested the loss of jobs to factory production).  Pappin recognizes the benefits of modern sanitation, medicine, electric power, and other technologies that were largely in place by the end of the twentieth century.  But he sees our continual fascination with, and enslavement to, constantly new things as unthinking and possibly harmful. 

 

He uses Uber as an example of an apparently novel enterprise which is really just a variation on the well-known theme of taxi companies.  Uber managed to outgrow conventional taxi firms by cutting pay and benefits to drivers in order to beat the established firms' prices.  As a result, you have a lot of overstressed gig workers who try to hold down two or three jobs in order to survive, and a company that has never yet made a profit.  Pappin claims that this kind of "creative destruction" (in Joseph Schumpeter's phrase) is far more destructive than creative.

 

Although Pappin doesn't mention social media in detail, it is another example of an innovation that is essentially a repackaging of a prehistoric idea:  gossip.  But social media put gossip on steroids, enabled key gossipers to gain a worldwide audience, and profited mightily thereby, at an unknown and probably negative net cost to the body politic.

 

Pappin looks to the ancient Greek philosopher Aristotle for guidance about how a regime should treat innovation with a mind toward self-preservation.  It turns out that Aristotle gave a lot of thought to political change, devoting an entire chapter of his book Politics to it. 

 

As you might expect, Aristotle starts out with some common-sense notions.  Before a regime permits a change, it should consider whether the change will truly improve things, or whether it's just a change for change's sake.  Admittedly, that is sometimes hard to tell in advance.  But keeping in mind the ultimate goal—the preservation of the regime—it only stands to reason that things that might tend to harm it should be restricted, or even not allowed in the first place.

 

One of the boldest recommendations Pappin draws from this general principle is that "[w]e should, for instance, consider state actions to limit the destructive 'innovations' of modern firms."  For example, brick-and-mortar stores have suffered or disappeared as a result of online retailing.  A suitably-scaled tax on online shopping could fix that.  Social media companies have thrived by staying several steps ahead of the sluggish democratic legislative process.  An energetic legislative and executive effort to get ahead of them could work wonders in alleviating the distortions, vindictiveness, and even deaths from bullying-inspired suicide that social media now is responsible for. 

 

Pappin is rather short on ideas about how we could get from here to there.  Part of the problem is that the very innovations we would try to regulate have crippled the democratic process by which we would regulate them.  Nevertheless, there is hope in discussions about how Section 230 of the Communications Decency Act could be modified or even eliminated.  Currently, it protects social media companies from being sued because of what third parties put on their websites.  As the recent success of the Texas "fetal heartbeat" law shows, passing laws allowing private citizens to effectively enforce laws rather than making the government do it can, at the least, throw a monkey wrench into corporate and governmental attempts to counter them.  So that might be one of the best methods to approach the problem of social media outlets whose operations do more harm than good.

 

There is of course the danger of going overboard with such regulation.  My standard example of overweening government control of technology is Cuba.  When Castro took over in 1959, he installed a top-down micromanaged regime that essentially froze large sectors of the economy in place.  The fact that no imports of foreign automobiles were allowed eventually made Cuba one of the largest repositories of working vintage cars in the world.  But as the former owner of a 1955 Oldsmobile, I can say that this situation is probably not what the vast majority of Cubans prefer in the way of transportation. 

 

My point is that freezing all innovation can be just as harmful, if not more harmful, than allowing all innovation.  And Aristotle himself would probably agree that the optimum situation lies somewhere between the extreme poles of total government control of anything novel coming into the economy, and complete passivity in the face of unbridled competition manipulated by an oligarchy of the wealthiest few, which is pretty much what we have now. 

 

As a Christian, Pappin ends his piece with a call for family-friendly innovations that would go so far as to pay cash to people who want to raise larger families.  But there's nothing exclusively Christian about this idea.  In another article in the same issue, an economist points out that the West in general is entering a period of demographic decline that could be extremely destabilizing.  Again, simple common sense says you can't have a nation of urban singles forever, even if you open the immigration floodgates and hope everyone will get along. 

 

Pappin's call to take a second look at our unthinking "innovation-is-good" attitude is something that goes counter to most corporate policy statements and the can-do engineering state of mind itself.  But just because we can do something doesn't mean we should.  Asking whether an innovation is really going to help society is a different question than asking whether it will help a given company's bottom line.  But the ethical engineer considers both.

 

Sources:  Gladden J. Pappin's "Advancing in Place" appeared in the December 2021 issue of First Things on pp. 25-30, and can be viewed online at https://www.firstthings.com/article/2021/12/advancing-in-place.

Monday, November 08, 2021

Downsides of the Metaverse

 

One important task of the discipline of engineering ethics is to take a look at new technologies and say in effect, "Wait a minute—what could go wrong here?"  Blogger Joe Allen at the website Salvo has done that with the Metaverse idea recently touted by Mark Zuckerberg, when Zuckerberg announced that Facebook will now be known (officially, anyway) as Meta.

 

Allen denies that Zuckerberg was merely trying to distract attention away from the recent bad publicity Facebook has been receiving, and claims that the Metaverse idea is something Zuckerberg and others have been dreaming of for years, especially proponents of the quasi-philosophy known as transhumanism.  What are these dreams?

 

In the Metaverse of the future, you will be able to put on virtual-reality equipment such as goggles or a helmet, and enter an alternate universe fabricated in the same way that the Facebook universe, or the many MMOGs (massively multiplayer online games), try to do in a comparatively feeble way today.  But the goal of metaverse technology is to make the simulation better than ordinary reality, to the point that you'll really want to stay there. 

 

It's not hard to imagine downsides for this picture.  Allen quotes Israeli author Yuval Harari as saying that mankind's power to "create religions" combined with Metaverse technology will lead to "more powerful fictions and more totalitarian religions than in any previous era."  The Nazis could do no more than put on impressive light shows and hire people such as Leni Riefenstahl to produce propaganda films such as "Triumph of the Will."  Imagine what someone like Joseph Goebbels could have done if he had been put in charge of an entire metaverse, down to every last detail.

 

Impossible?  Facebook and other companies are investing billions to make it happen, and Allen points out that companies are also lobbying Washington to spend federal money on developing the infrastructure needed to support the massive bandwidth and processing power that it will take. 

 

COVID-19 pushed many of us a considerable distance toward the Metaverse when we had to begin meeting people on Zoom rather than in person.  Zoom is better than not meeting people at all, I suppose, but it already has contributed in a small way to a breakdown in what I'd call decorum.  For example, judges have had to reprimand lawyers for coming to hearings on Zoom while lying in bed with no clothes on. And I've talked on Zoom with students who wouldn't dream of showing up in class with what they were wearing in the privacy of their bedrooms, in which I found myself a reluctant virtual guest.

 

Of course, if we had the Metaverse, the lawyer could appear as an avatar in a top hat, tuxedo, and tails if that was what the judge wanted to see.  But the point is that there is a whole complex of social-interaction rules or guidelines that children take years to learn (if they ever do learn), and in a Metaverse, those rules would be set by whoever or whatever is running the system, not just by the individuals involved.

 

Zuckerberg insists, according to Allen, that his Metaverse will be "human-centered."  That may be true, but a maximum-security prison is human-centered too—designed to keep certain humans in the center of the prison.  While Facebook has its positive features—my wife just learned through it yesterday of the passing of an old family friend—the damage it has done to what was formerly called civil discourse, and the sheer amount of bile that social media sites have profited from, show us that even with the relatively low-tech means we currently have, the downsides of corporate-mediated social interactions reach very low points indeed.

 

Does this mean we should jump in with a bunch of government regulations before the genie gets out of the bottle?  Oddly, Zuckerberg is calling for some kind of regulation even now.  But as Allen points out, Zuckerberg may be thinking that eventually, even government power will take a back seat to the influence that the corporate-controlled Metaverse will have over things.

 

Those who see religions as creations of the human brain, and human reality as something to be created from scratch and sold at a profit—these are defective views of what humanity is, as Pope St. John Paul II pointed out with respect to the anthropology of Marxism.  Transhumanist fantasies about recreating the human universe in our image share with Marxism the belief that human beings are the summit of intelligent life, and there is nothing or no One else out there to be considered as we remake the virtual world to be whatever we want it to be.  Even if you grant the dubious premise that the Zuckerbergs of the world merely want to make life better for us all instead of just getting richer, you have to ask the question, "What does 'better' mean to you?"  And whether the machinery is Communist or capitalist, the bottom-line answer tends to be the satisfaction of personal desires. 

 

Any system, human or mechanical, that leaves God out of the picture leads people down a garden path that ends in slavery, as John Bunyan's Pilgrim discovered in Pilgrim's Progress.  Before we are compelled to join the Metaverse in order to earn a living, we should take a very hard look at what those who are planning it really want to do.  Once again, we have a chance to set a new technology on the right path before we let it go on to produce mega-disasters we then have to learn from.  It's the engineers who come up with this stuff, and in view of the lack of interest or even comprehension that government representatives have for such things, perhaps it's the engineers who need to ask the hard questions about what could go wrong with the Metaverse—before it does. 

 

Sources:  Joe Allen's article "The Metaverse:  Heaven for Soy Boys, Hell on Earth for Us" is on the Salvo website at https://salvomag.com/post/the-metaverse-heaven-for-soy-boys-hell-on-earth-for-us.  I also referred to an article on John Paul II's views on anthropology and Marxism at https://www.catholic.com/magazine/online-edition/jpii-anthropology.

Monday, November 01, 2021

Asteroid Mining: What's In the Stars For It?

 

Mining has been a part of civilization ever since there was such a thing as civilization.  And even before manned space travel moved from the realm of speculation to reality, science-fiction writers were imagining mines on Mars or asteroids, and how things might go right, or more typically, wrong. 

 

As part of an exhibition on the future, the Smithsonian Institution recently sponsored some science-fiction writers to come up with brief stories about future technologies, and author Madeline Ashby chose the topic of asteroid mining.  I will leave the artistic judgment of her work to those more qualified than I to assess such things.  But the story she wrote touches on some issues that are only going to get more serious in the coming years, as commercial space flight gets cheaper and asteroid mining becomes a real possibility.

 

In forty words or less, Ashby's story describes the return of a woman to the artists' colony where she was born, and where her father invested some of the colony's money in an asteroid-mining venture that paid off twenty years later.  Right there, we can stop and say something about what would be needed if such an eventuality could occur.

 

First, we will need to establish property rights in space.  Currently, the issue of who owns a piece of extraterrestrial land is, shall we say, up in the air.  If you launch a satellite or manned vehicle into space, you own what you launched.  And I suppose the U. S. government holds practical title to the moon rocks that the Apollo astronauts brought back with them.  (As a side note, it seems we haven't done a very good job of keeping track of those rocks, as Wikipedia has an entire article on stolen and missing moon rocks, especially the ones that President Nixon gave away to foreign countries.) 

 

But the manned moon landings were about as far from a money-making operation as you could get.  When it becomes economically feasible to mount exploration and extraction operations for valuable minerals in space, nobody is going to want to spend the billions necessary to do that unless they can be reasonably sure they will get their money back, and then some.  And the only way that can happen is if there is a stable and predictable legal framework in place to guarantee such conditions, or at least make them reasonably likely.

 

The way humanity has handled this sort of thing in the past is by getting there first and either claiming the property by force, or buying the minerals from the people who were already mining them.  In space, we assume we won't find little green men wearing mining helmets, so the first case seems more likely.  Ashby's story imagines the enactment of a universal "right-to-salvage" law in space, and something like this will have to be agreed upon by all interested parties before serious space mining can occur.  Depending on how valuable and critical to the world economy space-mined products become, the legal framework of space mining could become a major threat to world peace if something goes wrong.  Turning the problem over to the U. N. might help, but the U. N.'s track record of settling major disputes is not exactly stellar, so to speak. 

 

Rather than get tangled in policy issues that would take many pages to thrash out, I think I will close with just two examples of how resource extraction can go very wrong, and at least moderately right. 

 

Ivory and rubber are not minerals, but in the 1890s they were both highly valued and to be found mainly in Africa.  In 1885, King Leopold II of Belgium managed to convince an international conference to grant him personally a large tract of land on which lived some 30 million people, and which became known (ironically) as the Congo Free State.  Leopold, who never visited the colony, ran it as his personal fiefdom, driving his colonial exploiters and supervisors to grab whatever they could, at whatever human cost, to the point that failure to meet production quotas on the part of native workers often led to amputation of a hand.  After two decades of literally hellish conditions, courageous reporters and writers brought the miseries of the Congo to the attention of the world, and the worst excesses were stopped.  But to this day, valuable minerals such as coltan, from which the tantalum in electronic devices is extracted, continue to be mined under hazardous, illegal, and exploitative conditions in Africa.

 

Some analysts blame these conditions on something called the "resource curse," which is a pattern shown by some countries with rich natural resources that paradoxically have worse health, economic growth, or development outcomes than neighboring countries without such resources.  The resource curse is not an inevitable effect of mineral wealth, however, and I will close with my example of how things can go better.

 

When oil was discovered in West Texas in the 1920s, it created an unexpected windfall for Texas A&M and the University of Texas.  Along with the founding of these institutions in 1876 and 1883, respectively, the State of Texas made grants of land to the schools, as (uniquely among states joining the Union after 1776) Texas retained ownership of public lands, instead of ceding them to the federal government.  For decades, these grants didn't pay much in the way of dividends except for property sales to the occasional rancher.  But with the discovery of oil, money began to flow into the university coffers in the form of what became the Permanent University Fund, an endowment that as of 2021 amounts to some $32 billion. 

While we can have a debate some other time as to the wisdom of extracting all that oil, the fact remains that a good bit of that oil money has wound up paying for the preservation and extension of knowledge, which is what universities do at their best. 

 

I have no idea where the future of asteroid mining lies, but most likely its consequences will fall somewhere between the two extremes of heartless exploitation and generous beneficence.  Let's hope for more of the latter and less of the former.

 

Sources:  An article in Slate at https://slate.com/technology/2021/10/speculation-short-story-madeline-ashby.html briefly describes the upcoming Smithsonian exhibit and contains the Madeline Ashby story about asteroid mining.  I also referred to Wikipedia articles on the Congo Free State and "Stolen and missing Moon rocks," and obtained the $32 billion figure and the history of the Permanent University Fund from  .

Monday, October 25, 2021

How Did COVID-19 Start?

 

Nearly two years after the initial fatalities that would turn out to be caused by the COVID-19 virus, we still do not know the answer to that question.  But last week, the U. S. National Institutes of Health revealed that it was funding "gain-of-function" research on bat coronaviruses in Wuhan during 2018 and 2019, in direct contradiction to Dr. Anthony Fauci's testimony before Congress that no such research was being supported by NIH there. 

 

In times past, plagues were regarded as simply acts of God, and while people tried to avoid transmitting infections by means of quarantines and travel restrictions, it was rarely possible to pinpoint the exact time and place where a given pandemic began.  But with advances in genetics and biochemistry, infectious agents can often be tracked down and successfully traced to their source, as was done with a localized outbreak of what came to be known as Legionnaires' disease, when it was traced to bacteria harbored in an air-conditioning water system.

 

By all measures, the COVID-19 pandemic has been the worst plague in modern times in terms of economic and social disruption, illness, and death.  According to the website www.worldometers.info/coronavirus, COVID-19 has been responsible for about 4.9 million deaths worldwide so far.  If for no other reason than to learn from our mistakes, it should be an urgent global priority to discover how the pandemic started, and whether it was by accidental transfer from an animal species such as bats to humans, or by means of deliberate creation of more aggressive viruses than occur in nature and accidental spread from the laboratory that created them.

 

Unfortunately, the COVID-19 pandemic started in a country that has systematically suppressed the information that would help in deciding this question.  But the following facts are known.

 

Shi Zhengli, a virologist at the Wuhan Institute of Virology, has worked for years with viruses taken from wild bats, as bats have the peculiar ability to host a wide variety of viruses that are harmful to other species without the bats themselves becoming ill.  To perform this research, she and her colleagues traveled far and wide to collect samples from bats in remote caves in other parts of China.  She has continued to perform research connected with COVID-19 in China since the pandemic began, and denies that there has ever been an accident in her institute resulting in infection of staff or students. 

 

Wuhan is by all accounts the city where COVID-19 first claimed its victims.  It is the largest city in central China with a population of about 11 million.

 

As we now know, the NIH funded so-called "gain-of-function" research through an organization called EcoHealth Alliance, headed by researcher Peter Daszak, which was conducted in association with the Wuhan Institute of Virology.  Gain-of-function is a bland phrase meaning that an infectious agent has been enhanced in its ability to infect a host.  While some argument can be made that concocting such viruses is the only way to figure out a defense against them, it is obviously an extremely dangerous thing to do.  Dr. Shi admits that prior to COVID-19, much of her viral research was done in lab conditions that were less safe (so-called "BSL-2" and "BSL-3") than the highest-security BSL-4 labs.

 

Let's imagine a different scenario and ask some questions about it.  Suppose nuclear weapons had not yet been invented, but researchers were hot on the track of cracking the secret of nuclear energy.  Suppose also in this counterfactual fantasy world that the U. S. funded some of this research at a center in São Paulo, Brazil.  Suddenly one day, a huge explosion happens in São Paulo, wiping it off the map and sending radiation into the air that eventually kills a total of 4.9 million people worldwide.  Despite the fact that most of the relevant data to determine the exact cause was vaporized in the explosion, wouldn't it be wise at least to do our very best to figure out what happened, with or without the cooperation of the Brazilian government?

 

In a sense, we already know what to do.  Whether SARS-CoV-2, the virus responsible for the COVID-19 pandemic, actually originated in a lab accident in Wuhan or in an exotic-food market there, we now know that early detection and faster responses to highly contagious new diseases might make the difference between another world-crippling pandemic and a minor contained outbreak. 

 

In the case of COVID-19, the Chinese government delayed for weeks before even publicly acknowledging the magnitude of the problem, and criticized brave medical workers who tried to publicize the seriousness of the nascent epidemic.  In retrospect, this was exactly the wrong thing to do, but it is the natural response of most governments to minimize something that is not yet so obviously awful that denials will look silly. 

 

One hopes that if a similar outbreak happened in, say, Chattanooga, state and federal officials would be more forthcoming than their Chinese counterparts in telling the rest of us about what was going on.  As to the measures that would have stopped the epidemic in its tracks, it seems that only a city-wide 100% quarantine with extreme measures taken to enforce it would have worked, and maybe not even then.  Any government will be reluctant to impose such a draconian measure unless there are very good reasons to do so. 

 

But as things stand, there are still unanswered questions about what other activities EcoHealth Alliance was doing in China that they were supposed to report on but didn't.  Unless the Chinese government suddenly becomes more forthcoming about what really happened in Wuhan, we may never know how COVID-19 really began.  But we certainly know how it spread.

 

In investigations of engineering disasters, future accidents of a similar nature can't reliably be forestalled until the exact mechanism of the one under investigation is understood.  We have part of the picture of COVID-19's origins, but not the whole story.  The best we can do now is to be much more aware of rapidly spreading fatal diseases in the future, and willing to take what may look like extreme measures locally to prevent another global pandemic. 

 

Sources:  National Review carried on its website the article "The Wuhan Lab Coverup" at https://www.nationalreview.com/2021/10/the-wuhan-lab-cover-up/.  I also referred to the NIH letter at https://twitter.com/R_H_Ebright/status/1450947395508858880 and the Wikipedia article on Shi Zhengli. 

Monday, October 18, 2021

Federal Regulators Turn the Heat Up on Tesla

 

On Tuesday, Oct. 12, the U. S. National Highway Traffic Safety Administration (NHTSA) sent a letter to the automaker Tesla telling it to issue a recall notice if a software upgrade to its vehicles involves a safety issue.  This is the latest development in an escalating conflict between federal regulators and the electric-car maker, whose flamboyant CEO, Elon Musk, seems to enjoy twitting regulators of all kinds.  But beyond personalities, what we are seeing here is a conflict between a traditional legal system and a technology that has advanced beyond it.

 

The immediate issue that prompted the letter was an "over-the-air" software upgrade that Tesla made in late September.  The NHTSA had started investigating a number of crashes of Tesla vehicles into emergency vehicles that were parked and had their flashers operating.  The upgrade was intended to improve the ability of Tesla vehicles to avoid hitting emergency vehicles in low-light conditions. 

 

What annoyed the NHTSA was not that Tesla addressed the problem they were looking into, but the fact that Tesla didn't issue a recall notice, with all the formalities and paperwork that a recall requires.

 

On the one hand, Tesla's position makes a certain amount of sense.  Back in the pre-software days when every automotive defect required a trip to the repair shop, the only way to get something fixed was to issue a recall to every traceable owner of every involved vehicle and try to get them all repaired within a reasonable time frame.  But now that software comprises a growing part of the overall system that we still call an automobile, upgrades can be done remotely and silently over the Internet without the owner even being aware of it.  So why go through all the hassle and bother of a recall when the thing can be done without anybody but the automaker knowing about it?

 

On the other hand, the NHTSA seems to think that such so-called "stealth recalls" can lead to problems.  How do car owners know the upgrade has really been done?  How will purchasers of used cars know they're getting a car with all pertinent upgrades?  What if an accident happens before the upgrade is done and there's no public record of whether the particular car involved was upgraded before the accident?  It's issues like these that are presumably behind the NHTSA's insistence that even over-the-air upgrades have to be accompanied by recall notices if they affect safety issues.

 

Of course, which upgrades affect safety is not always an easy matter to determine.  But the upgrade issued to improve the discrimination of emergency-vehicle lights certainly falls in that category.  This is not a trivial matter for Tesla, because if the NHTSA decides the firm has not complied with recall regulations by Nov. 1, they can fine the company $114 million.  Even to Elon Musk, one of the richest men in the world, that is not chump change.  And there are the people called shareholders to consider.

 

Concerns like these are not confined to the NHTSA's dealings with Tesla, as cars from all makers are starting to resemble rolling software platforms rather than the all-mechanical vehicles of old.  The AP report describing the conflict attributes the agency's tougher stance to a general trend on the part of the Biden administration to clamp down on automated-vehicle safety issues, rather than tread lightly for fear of crippling the new technology in its cradle, so to speak. 

 

A dyed-in-the-wool libertarian might criticize the NHTSA for insisting on its bureaucratic rules being followed despite the fact that Tesla was doing the right thing, namely, fixing the emergency-vehicle-light problem that was attracting public attention.  But decades of experience with government regulation of auto safety includes numerous examples, ranging from introducing double-layer safety glass to defective-air-bag recalls, in which it is pretty clear that left to their own devices, the automakers would not have acted to protect the public from the hazards caused by their own products. 

 

The problem I see is not whether to regulate, but how to regulate in light of "over-the-air" fixes that require no physical trip to a repair shop and are therefore hard to document.  Clearly, the old-fashioned recall regulations need revisiting in light of current technology, and it's always easier for a bureaucracy to use the regulations it has than to rethink them on the fly. 

 

But suppose there had been some kind of National Internet Security Administration in place at the dawn of the Internet, and in the 1990s they'd passed a regulation requiring all firms that develop web browsers to write letters (not emails, but letters) to all users whenever an upgrade to their web browser was made that affected security.  That might have made a little sense back when web browsers were upgraded once a year, but anyone using Microsoft products knows upgrades are done almost daily, affecting all kinds of things, including security.  If such a regulation were still in place about notifying users, we'd all be buried in a blizzard of mail from software companies.  This would be a boon for the U. S. Postal Service, but not for anyone else.

 

So clearly, new technologies call for new regulatory processes.  One can imagine some kind of software certification for purchasers of used cars, verifying that the cars they buy have all the software upgrades made up to the date of purchase.  That would require cooperation among the automakers, used-car dealers, state and federal regulators, and customers, but these parties have worked together in the past, and it can happen again.  As for written notifications any time safety-related software is upgraded, it sounds to me a little like the NHTSA is hounding Tesla just for hounding's sake. 
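The certification idea sketched above could, in principle, be automated.  What follows is a minimal illustration only, assuming a hypothetical data model in which regulators publish a list of safety-related update versions required for each model and each vehicle carries a record of the updates it has installed; none of these names or version strings correspond to any real Tesla or NHTSA system.

```python
from dataclasses import dataclass, field

# Hypothetical registry of safety-related over-the-air updates
# required per model (version strings are purely illustrative,
# e.g., a fix for emergency-vehicle-light recognition).
REQUIRED_SAFETY_UPDATES = {
    "Model 3": ["2021.24.12", "2021.36.5"],
    "Model S": ["2021.32.21"],
}

@dataclass
class Vehicle:
    vin: str
    model: str
    installed_updates: list = field(default_factory=list)

def certify_up_to_date(vehicle: Vehicle) -> tuple:
    """Return (certified, missing), where `missing` lists any
    required safety updates not yet installed on this vehicle."""
    required = REQUIRED_SAFETY_UPDATES.get(vehicle.model, [])
    missing = [u for u in required if u not in vehicle.installed_updates]
    return (not missing, missing)

# A used-car buyer (or dealer) could run the check at sale time:
car = Vehicle("5YJ3E1EA7KF000001", "Model 3",
              installed_updates=["2021.24.12"])
certified, missing = certify_up_to_date(car)
print(certified, missing)   # False ['2021.36.5']
```

Of course, a real system would need a trusted, tamper-resistant record of installed updates, which is exactly the kind of public documentation a recall notice is meant to provide.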

 

In a related concern, there are genuine issues regarding how easy it is for drivers to fool Tesla's autopilot into thinking the driver is paying attention to the roadway when in fact the driver is playing a video game or even sitting in the back seat.  And Tesla needs to be taken to task for that issue.  But it's not good sense to punish a company for doing the right thing, which the NHTSA appears to have done in this case.

 

Sources:  Many news outlets published the Oct. 13 AP story by Tom Krisher which was entitled on the AP website "US regulators seek answers from Tesla over lack of recall," at https://apnews.com/article/technology-business-software-881ef270cbfdc6e4f62b88400588686f, to which I referred.