Monday, November 23, 2020

Technology, Demography, and Destiny


Most people, including most engineers, suspect there is some relationship between the advances in transportation, communications, sanitation, and health care brought about by modern science-based engineering on the one hand, and the tremendous and rapid growth in world population since 1800 on the other.  In 1800 there were an estimated 1 billion people worldwide; now there are about 7 billion.  Something happened beginning a couple of hundred years ago that had never happened before in the history of the world, and the effect was to make population soar at an unprecedented rate. 


Whatever your opinion on whether this is a good thing or not, demographer Paul Morland has done us all a favor by writing The Human Tide:  How Population Shaped the Modern World.  The job of a demographer is to study the details of human population statistics:  birth rates, death rates, migration, and their effects and causes in relation to economics, politics, and the rest of life.  So far, so dull, you think?  Not in Morland's hands. 


It turns out that no matter what nation or ethnic group you're talking about, the encounter with modernity (which mainly means modern methods of transportation, communication, etc.) gives rise to what Morland and his colleagues call the first type of "demographic transition."  For most of human history, population was limited both by the scarcity of food and the brevity of human life due to disease and starvation.  In Biblical times, for example, nearly everyone lived on a farm, and married women typically had four or more children so that enough of them would live long enough to become useful farm hands.  Everyone lived in what Morland calls "the Malthusian trap," named after the English cleric and scholar Thomas Malthus (1766-1834).  Malthus said that any increase in the food supply will only tempt people to have more children, and the increased number of mouths to feed more than makes up for the original increase, meaning that near-starvation will be the typical lot of humanity into the indefinite future.


But Malthus had no way to tell that the coming century would bring with it technological improvements in agriculture (steam and gasoline tractors), transportation (railroads, automobiles), public sanitation (clean water, sanitary sewers), and health care (improved pediatric and geriatric medicine), all of which enabled first England, then parts of Europe, the U. S., and other countries to escape the Malthusian trap.  And it turns out that everybody escapes more or less the same way, although the timing varies from place to place.


First, falling infant mortality and increasing lifespans lead to a tremendous boom in population, as women keep having those four or six children they've always had, but most or all of them now survive to adulthood and live much longer lives, into their fifties or sixties.  After a generation or so, especially if the cultural setting encourages literacy and advanced educational opportunities for women, they stop having such large families.  The means by which this happens is something of a mystery, as it involves decisions and behavior that are not easily observed on a mass scale.  But in culture after culture, country after country, even in religions as different as Christianity and Buddhism, the first demographic transition works more or less the same way.


Once the average family size comes down to replacement level (typically about two and a fraction children), some countries move on to what Morland calls the second demographic transition:  a further reduction in the birth rate below the replacement level.  This does not immediately result in an overall population decline, because large numbers of young women may still be growing into childbearing age, immigration into the region may be significant, and many other factors can intervene as well. 
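The effect described above, sometimes called "population momentum," can be illustrated with a toy cohort model.  All the numbers below are made up for illustration, not taken from Morland's book: the point is simply that a youthful age structure keeps the total growing for a generation or two even after fertility falls below replacement.

```python
# A toy three-cohort model of population momentum. Each step is one
# 30-year "generation." All figures are illustrative assumptions.

def step(cohorts, net_fertility=0.9, survival=1.0):
    """Advance one generation. net_fertility < 1.0 means each young
    person is replaced by fewer than one child (sub-replacement)."""
    young, middle, old = cohorts
    return [young * net_fertility, young * survival, middle * survival]

pop = [4.0, 2.0, 1.0]   # a young, fast-growing population (millions)
for generation in range(4):
    print(generation, round(sum(pop), 2))
    pop = step(pop)
# Totals: 7.0, then 9.6 and 10.84 (still growing on momentum),
# and only in the third generation does decline set in (9.76).
```

Even though every generation here has below-replacement fertility, the large young cohort keeps total population rising for sixty model years before the decline appears.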


But in some cases such as Japan, the birth rate is extremely low, the overall population is declining, the median age is among the highest in the world, and it is estimated that up to 30,000 elderly Japanese die alone in their homes every year, giving rise to a whole industry that specializes in removing abandoned bodies. 


This is not necessarily the fate that all modern industrialized countries face.  Some countries such as Sri Lanka seem to have stabilized themselves at a comfortable balance with replacement-level birth rates, reasonably long lifetimes, and a fairly constant population figure.  But every country that encounters modern technology eventually goes through at least the first demographic transition.


The book also made me wonder what relationship should obtain between the way large groups of people behave on average, almost regardless of culture or faith, and the ideals of certain faiths, particularly Christianity.  Morland points out that the universality of demographic transitions happens because nearly everybody (a) would rather live longer than die young, and (b) wants the same for their children, however many there are.  So when the technical means become available to achieve these ends, a society adopts them, and eventually quits having six or eight kids per family unless there are extremely strong cultural or religious reasons to keep doing so.  Morland does mention exceptions such as the Jewish Haredim ultra-orthodox sects and Christian groups such as the Amish, who tend to have large families whatever their circumstances are.  But unless such convictions become widespread in the general population, it's unlikely that large families will become the norm in modern industrialized countries.


Is that a moral failing?  Admittedly, there is a wide spectrum of opinion or conviction even within Christianity, ranging from liberal groups that favor abortion rights to conservative elements of the Roman Catholic Church that look not only upon abortion, but on any form of birth control other than "natural family planning" (formerly known as the rhythm method) as sinful.  So in one sense, it depends on who you ask.


What Morland taught me is that while demography isn't all of destiny, it does have a lot to say about the histories and trajectories of regions, countries, and even continents.  Sub-Saharan Africa, for instance, is the only place where the majority of countries are still undergoing their first demographic transition, with extremely fast population growth that has not yet been dampened by that mysterious collective decision to have fewer children per mother.  Whether countries such as Nigeria end up managing their transition well and stabilizing like Sri Lanka, or whether they get mired in the chaos and civil strife that seem to accompany having lots of young unemployed men in your population, is a question that remains to be answered. 


But when the answer comes, people like Paul Morland will have helped us understand how the invisible hand of demography contributes to history in general, and the history of technology too.


Sources:  Paul Morland's The Human Tide:  How Population Shaped the Modern World was published in 2019 by Public Affairs Publishing, New York.

Monday, November 16, 2020

The COVID-19 Vaccine: When, Where, and Who?


Most experts agree that the only thing that will put the current COVID-19 pandemic to rest is some kind of vaccine.  One firm—Pfizer/BioNTech—has progressed to what is called a Phase 3 trial, which involved about 43,000 people who took it with apparently no serious side effects.  There is still a long way to go even with the most advanced projects, because achieving "herd immunity"—enough immune people to discourage the virus from spreading—may require on the order of several billion doses.  And many of the prospective vaccines require two injections spaced weeks apart, which further complicates matters.
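The arithmetic behind "several billion doses" can be sketched with the classic herd-immunity threshold, 1 - 1/R0.  The R0 value and population figure below are illustrative assumptions on my part, not numbers from this article:

```python
# Back-of-the-envelope herd-immunity arithmetic. The threshold
# formula 1 - 1/R0 is standard epidemiology; the specific inputs
# are illustrative assumptions.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune: 1 - 1/R0."""
    return 1.0 - 1.0 / r0

def doses_needed(population: float, r0: float,
                 doses_per_person: int = 2) -> float:
    """Total doses to immunize the threshold fraction of a population."""
    return herd_immunity_threshold(r0) * population * doses_per_person

world = 7.8e9   # rough 2020 world population
r0 = 3.0        # a commonly cited early estimate for SARS-CoV-2
print(round(herd_immunity_threshold(r0), 2))    # about 0.67
print(round(doses_needed(world, r0) / 1e9, 1))  # about 10.4 billion doses
```

With a two-dose vaccine and an R0 around 3, the required two-thirds coverage works out to roughly ten billion doses worldwide, which is why "several billion" is, if anything, conservative.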


Engineers are familiar with tradeoffs that are usually imposed by economic restrictions.  When I was a young engineer just out of college, I was teamed with an older and more experienced engineer, and one day we were talking about various possible ways to tackle a certain problem in a new design we were working on.  I described three or four different ways to tackle it that I thought were pretty clever, but he seemed unimpressed.  Finally, I asked him why he wasn't more excited about these innovative ideas I was proposing.


"Heck," he replied, "I can build one of anything!  The real challenge is making thousands of them work at a price we can afford."  The harsh realities of the marketplace had educated him to look not just for technically sweet ideas, but for ideas—new, old, or otherwise—that would do the best job for the least money.  That taught me that having clever ideas—or one dose of a highly effective vaccine—is only a small step toward solving a real-world engineering or technical problem.


Making a billion high-quality vaccine doses in a short time is a challenge that hasn't been discussed much so far.  But supposing that vast production problem is overcome, and reliable vaccine doses begin to enter the pipeline, who is going to get them first? 


An interesting study cited by a recent BBC article says that the first doses should go to different groups, depending on how effective the vaccine is.  No vaccine is 100% effective, and this is especially true of virus vaccines.  The annual flu-virus vaccine that millions of people get is rarely more than 60% or so effective, depending on the particular year and the mix of viruses that show up after the vaccine is developed. 


There are different ways to measure the effectiveness of vaccines.  One way is to measure how many people who are vaccinated and then exposed to the virus develop symptoms.  Another way is to measure how likely a vaccinated and exposed person is to spread the disease to others, whether or not they manifest symptoms.  The study's authors point out that if you developed a vaccine that was only 30% effective in preventing symptoms, it would fall below the U. S. Food and Drug Administration's 50% threshold and wouldn't even be approved.  But if it happened to be 70% effective at stopping people from spreading the virus, it would actually do more good than a different vaccine that prevented symptoms with 100% effectiveness but allowed the virus to spread.
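The study's tradeoff can be made concrete with a toy calculation.  The model and every number below are illustrative assumptions of mine, not figures from the study the article cites:

```python
# A toy comparison of symptom-blocking vs. transmission-blocking
# vaccines. All inputs are illustrative assumptions.

def effective_r(r0: float, coverage: float,
                transmission_efficacy: float) -> float:
    """Reproduction number after vaccinating a fraction `coverage`
    of the population with a vaccine that blocks a fraction
    `transmission_efficacy` of onward transmission."""
    return r0 * (1.0 - coverage * transmission_efficacy)

r0 = 3.0
coverage = 0.6

# Vaccine A: prevents symptoms perfectly but does nothing to spread.
# Vaccine B: only 30% effective against symptoms, 70% against spread.
r_a = effective_r(r0, coverage, transmission_efficacy=0.0)
r_b = effective_r(r0, coverage, transmission_efficacy=0.7)

print(round(r_a, 2))  # 3.0  -- the epidemic keeps growing unchecked
print(round(r_b, 2))  # 1.74 -- spread is substantially slowed
```

In this sketch, the "worse" vaccine B cuts the reproduction number nearly in half, while the symptom-perfect vaccine A leaves the epidemic's growth rate untouched, which is the study's point.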


That is why there is no single answer to the question, "Who should get the vaccine first?"  If it is most effective in preventing the virus from spreading, then the target population should be the ones who spread it the most.  Currently that appears to be older children and younger adults, say between 10 and 35.  Few people in that group die of the virus, but precisely because many of them have mild or no symptoms, they spread it very easily. 


On the other hand, if the vaccine is good at preventing symptoms but not so good at stopping the spread, you probably want to target the population that is most vulnerable to the disease:  people over 65 and those in nursing homes.  That will save the most lives in the short term, while giving us time to vaccinate the rest of the population to approach the goal of herd immunity.


Any way you slice it, we face a very long uphill battle in fighting this disease.  In some countries such as the U. S. and China, the expense of buying and distributing the vaccine is relatively trivial compared to other things the government is doing.  But in poorer countries, vaccinating the majority of the population with anything is a major challenge, and so we can expect the disease to hang around in pockets long after it has been controlled in more economically well-off places.  So the last thing to go may be travel restrictions concerning COVID-19, at least to some countries where it may not be controlled for several more years.


Within a given country, the distribution of the vaccine may be implemented mainly by the government, mainly by private enterprise, or more typically by a combination of the two.  As it is in the interests of every government to free its citizens from the threat of COVID-19, substantially free distribution would seem to be a no-brainer, although there are practical obstacles to that as well.  Certain minority populations have been disproportionately affected by COVID-19, and the U. S. National Academies of Sciences, Engineering, and Medicine has stated that there is a "moral imperative" to make sure that this imbalance is addressed in any proposed distribution scheme. 


And last but not least, there is the problem that not everybody is going to want to be vaccinated.  We are a long way from the 1950s, when Jonas Salk was universally praised as a god-like hero and millions of U. S. citizens gratefully took their children to receive polio vaccine injections without raising even a quibble concerning its safety.  Nowadays, the pronouncements of experts always inspire somebody on the Internet to say, "Sez who?" and the small but vocal opponents of any kind of vaccination have persuaded lots of people at least to hesitate before believing uncritically anything an expert says. 


Even with all these uncertainties, it does look like we will get a vaccine sometime, and eventually it will begin to slow down the spread of COVID-19.  As far as I'm concerned, it can't come too soon.


Sources:  The BBC published the article "COVID:  How close are we to a vaccine?" on Nov. 12, 2020.  The New York Times published "Who should get a COVID-19 vaccine first?" on Nov. 5, 2020. 

Monday, November 09, 2020

Those Disagreeable Inventors

Inventors don't play much of a role on the public stage these days compared to the glory days of Marconi and Edison.  But they are nonetheless vital to modern civilization, as technical progress is the main economic engine that drives advanced industrial societies.  Marian L. Tupy, a senior fellow at the Cato Institute think tank, says in a recent issue of National Review that we ought to be careful how we treat present and future inventors, even if they prove to be rather disagreeable.  And he makes a good case that many of the best ones are just that, and their disagreeability is intrinsic to what makes them good inventors.

Citing several books about psychology, innovation, and DNA, Tupy says successful inventors tend not to care what other people think, and may even take delight in discomfiting their more powerful peers.  It's ancient history now, but the legendary 1984 Apple commercial shown during the Super Bowl portrayed a young woman in bright colors, running freely as she charges through a gray crowd of drones hypnotized by Big Brother's face on a telescreen, throws a sledgehammer into the screen, and literally busts up everything.  It has Steve Jobs' fingerprints all over it.  Numerous sources show that Jobs, who is probably the leading candidate for the most famous inventor of the latter 20th century, was not an agreeable person.

So why can't inventors just get along with people like the rest of us do?  Tupy contends that those who successfully seek innovative technical solutions to problems also tend to be loners, somewhat socially awkward, and not terribly concerned about fitting in and getting other people to help them with problems.  Rather, they prefer to work with things and ideas on their own to solve problems.  The umbrella phrase for this type of personality is autism-spectrum disorder, which of course can be crippling in its more severe forms, although inventors such as Temple Grandin prove that even clinical-grade autism can be overcome.

Over my career, I have met several, and gotten to know a few, inventors who actually profited from their patents, or at least saw the companies or organizations they were associated with profit from them.  Few of them meet the classic description of an autistic personality:  intense aversion to social interaction, preference for solitude, etc.  I would say that while the autism-spectrum observation is true as far as it goes, and it may be close to necessary to some degree, it is by no means sufficient.  And for this I will turn to some history I'm very familiar with:  my own.

When I got to college, I was surprised to see that someone had made a poster that showed me as a classic nerd.  It wasn't really me, but it might as well have been:  plastic-framed glasses, button-down sweater, shirt pocket bulging with pens, slide-rule case hung on belt, etc.  I had spent most of my spare time growing up playing with electronics rather than football or socializing.  I never dated in high school.  And I went to college at what was then probably the West Coast's capital of nerd-dom:  Caltech.  If being on the autism spectrum was all it took to be a successful inventor, I should have done fine.

But I think most successful inventors have a drive that I mostly lacked:  a desire to show up the established order and make it look foolish, not by words, but by actions, hardware, and (nowadays) software.  That part of the successful inventor's personality is missing from my makeup.  On the contrary, I tend to revere established institutions and procedures, not delight in their ruination, even if such ruination works to my benefit.  This attitude of reverence toward existing structures is exactly what you don't want if your job is to convince others that your idea is better than theirs.  It's that simple.

My name is on a couple of patents, one of which (obtained with my Ph. D. supervisor at U. T. Austin in the 1980s) could conceivably have become quite valuable, as it anticipated the future growth of what is known as RFID technology—the little tags that set off alarms if you try to shoplift a pair of sneakers from Walmart.  But as it happened, the university that paid for the patent didn't do anything with it, and neither my supervisor nor I had the time or inclination to do the hard work of convincing people that this was the coming thing.  It would have involved starting a company, and that was not on my scope screen at the time, nor has it ever been since.

The reason Tupy wrote what he did was to make the point that societies which discourage disagreeableness of the type in question may be shortchanging themselves when it comes to innovation.  Nobody knows how to create inventive people.  It's like farming:  the farmer doesn't really grow anything.  He or she just creates conditions under which growth of desirable plants can occur.

So cultures that allow people to do things differently, to play around with ideas without having to worry about getting in trouble with their peers or the government, tend to be cultures in which innovation and invention thrive.  A good contrast here is between the U. S. in the 1950s and the old USSR (Soviet Union), where everyone had to be constantly on guard lest they be heard to say something even slightly negative about the government, at which point their neighbors might rat on them and they'd end up in the Gulag for twenty years.  The USSR was not a hotbed of technical innovation then, although it supported scientists who aided its nuclear-weapons program.  But as far as economically profitable inventions go, it was no contest, as the U. S. was far and away the best place to be for that kind of thing, even in the allegedly conformist and repressive 1950s.

By all means, let's preserve what freedoms we have, to allow those cranky inventors among us to be by their lonely selves, cooking up ideas and gizmos that will make them and others millionaires and benefit the rest of us in the bargain.  But being a nerd isn't all it takes—you have to want to make fools of the complacent powers that be, and succeed at it, too.

Sources:  Marian L. Tupy's article "Disagreeability, Mother of Invention" appeared on pp. 18-20 of the Nov. 16, 2020 print edition of National Review.  The 1984 Apple commercial, which everyone ought to see at least once, can be found online.

Monday, November 02, 2020

Asteroid Dust, Anyone?


Last week, on Oct. 30, the NASA spacecraft Osiris-Rex stowed about four pounds (2 kilograms) of material sampled from an asteroid named Bennu.  If the rest of the mission goes as well as it has so far, in October of 2023 the sample container will land in the Utah desert, bringing back the largest amount of asteroid material in history.  Japanese space probes have previously succeeded in sampling pieces of asteroids, but never as much material as we will get from Bennu.


In the nature of such projects, planning probably began more than a decade ago.  This is the type of project that scientists devote entire careers to, and I'm sure that dozens or hundreds of people have been working on it for most of the twenty-first century so far.  The spacecraft was launched from Cape Canaveral in September of 2016, and spent about two years catching up to Bennu, whose orbit lies partly inside that of the Earth.  In fact, one reason Bennu was chosen as a target is that there is about a 1 in 2700 chance that it will collide with Earth late in the next century.  Bennu is small by asteroid standards, only about 490 meters (1600 feet) across.  But it's big enough to do plenty of damage if it ever entered our atmosphere.  An old rule from combat is "know your enemy," so if we find ourselves scrambling to avoid Bennu's wrath and want to do something about it, knowing what it's made of will help.


Once the spacecraft reached the asteroid, it went into orbit about a mile away.  Even an object as small as Bennu has enough gravity to enable a satellite to orbit in that fashion.  Then, in a horribly tricky 36-hour process, Osiris-Rex crept up to the surface and snatched a four-pound sample and put it in a can to return it to Earth.  And NASA has pictures to prove it.


The entire project, including the launch rocket, cost under $1 billion.  That is chump change compared to the least expensive manned mission the U. S. undertook, Project Mercury back in the early 1960s, which cost about $2 billion in 2020 dollars.  My point is that if you just leave the people at home, you can do extremely impressive things in space for a whole lot less money.


What do we get for that $1 billion?  Well, if you like to put it that way, the world's most expensive dirt, at $250 million a pound.  Space exploration and astronomy seem to be the main beneficiaries these days of what is left of pure-science curiosity.  That is why the U. S. government found the wherewithal and the consistency to fund the Osiris-Rex project from its inception in the early 2000s to its completion sometime in 2023. 


And that is appropriate, because from my worm's-eye view teaching young people who are technically inclined (electrical engineers), many of the best of them seem to seek out space-related jobs.  One of the best electromagnetics students I ever had went to work initially for Boeing, and she is now at Blue Origin, the spaceflight company founded by Amazon founder Jeff Bezos.  And just last week, I was talking with a former grad student of mine who wants to go into aerospace engineering or science to develop space probes. 


In terms of frontiers of exploration, it makes sense to look upward, as there's a lot of room out there and a lot of things to find out.  Every age has what philosopher Richard Weaver calls its "metaphysical dream."  This is not necessarily a religious thing.  But just as most of us need some basic reason to get out of bed in the morning, a society needs something to look forward to and hope for.  Astronomy and especially space exploration, both manned and unmanned, seem to satisfy that need in a way that few other current enterprises do.


While interest and pride in unmanned projects like Osiris-Rex is justified, another point to be made is that if exploration is all you want to do, leaving the people at home is by far the most efficient way to do it.  This argument does not go down well with the space-as-manifest-destiny crowd, who believe that Earth is to space as Europe was to America:  a place we'll simply look back on and say yes, we came from there, but we're glad we're here now. 


The danger in making space the ultimate destiny of mankind is the same danger that any other utopian project brings.  For whatever reason, it seems that if people convince themselves that there is a perfect future out there for them, infinitely better than anything we have now, they begin to justify all manner of wickedness and injustice in the present for the sake of the ideal perfect future that somehow never arrives.  This sort of thing is most easily observed in the history of Marxism, which led to the death of millions on the altar of the perfect workers' paradise that never got here. 


Maybe you think that hoping to colonize other planets or space in general can't cause serious problems here on Earth.  Well, think of it this way.  Marriage is supposed to be a lifelong total commitment of two souls to each other.  But if one of the parties starts thinking, "Well, things are fine now, but if (he, she) gets old and floppy, I can always find somebody else," just the harboring of that thought can cause invisible corrosion of the relationship that can eventually lead to a total rupture.


Once we start looking at Earth not with the eyes of a homeowner, but with the eyes of a renter who has no long-term commitment to the upkeep of the property, you can see what problems might arise.  Everybody involved in Osiris-Rex deserves praise for their persistence, skill, and commitment to a long-term project that could benefit all of humanity, and not just a few space scientists.  By the same token, let's not look on Earth as just a starter apartment, but as the place where humanity has committed to stay and live peaceably and responsibly as long as we can.


Sources:  The Seattle Post-Intelligencer carried the AP article "Asteroid samples tucked into capsule for return to Earth" on Oct. 31, 2020.  I also referred to Wikipedia articles on Osiris-Rex, Bennu, and Blue Origin.

Monday, October 26, 2020

Is Google Too Big?


On Tuesday, Oct. 20, the U. S. Department of Justice (DOJ) filed a lawsuit against Google Inc. under the provisions of the Sherman Antitrust Act, charging that the firm is a "monopoly gatekeeper for the Internet."  This is the first time the DOJ has used the Act since 1998, when similar charges were filed against Microsoft.  The Microsoft case failed to break up the company, as the DOJ once announced its intentions to do, but reduced the dominance of Microsoft's Internet Explorer browser by opening up the browser arena to more competition.


By one measure, Google has an 87% market share in the search-engine "market."  I put the word in quotes, because nobody I know gives money directly to Google in exchange for permission to use their search engine.  But as the means by which 87% of U. S. internet users look for virtually anything on the Internet, Google has the opportunity to sell ads and user information to advertisers.  A person who Googles is of course benefiting Google, and not Bing or Ecosia or any of the other search engines that you've probably never heard of.


Being first in a network-intensive industry is hugely significant.  When Larry Page and Sergey Brin realized as Stanford graduate students that matrix algebra could be applied to the search-engine problem in what they called the PageRank algorithm, they immediately started trying it out, and were apparently the first people in the world both to conceive of the idea and to put it into practice.  It was a case of being in exactly the right place (Silicon Valley) at the right time (1996).  A decade earlier, and they would have lapsed into obscurity as the abstruse theorists who came up with a great idea too soon.  And if they had been only a few years later, someone else would probably have come up with the idea first.  But as it happened, Google got in earliest, dominated the infant Internet search-engine market, and has exploded ever since along with the nuclear-bomb-like growth of the World Wide Web. 
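For the curious, the core of the PageRank idea can be sketched in a few lines of Python: repeatedly redistribute each page's score along its outgoing links, damped by a constant.  The tiny four-page "web" below is a made-up example of mine, not anything from Google:

```python
# A minimal sketch of PageRank by power iteration. Assumes every
# page links to at least one other page (no dangling nodes).

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to.
    Returns a dict of page -> rank score (scores sum to 1)."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new = {p: (1.0 - damping) / n for p in pages}
        # ...plus a damped share of the rank of every page linking to it.
        for page, outgoing in links.items():
            share = damping * rank[page] / len(outgoing)
            for target in outgoing:
                new[target] += share
        rank = new
    return rank

web = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
ranks = pagerank(web)
# "C" collects links from everyone, so it ends up ranked highest.
print(max(ranks, key=ranks.get))  # C
```

The insight Page and Brin exploited is visible even here: a page is important if important pages link to it, and that circular definition is resolved by simple iteration, which is just matrix algebra in disguise.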


It's hard to say exactly which one of the classic bad things about monopolies is true of Google. 


The first thing that comes to mind is that classic monopolies can extract highway-robbery prices from customers, as the customers of a monopoly must buy the product or service in question from the monopoly because they have no viable alternative.  Because users typically don't pay directly for Google's services, this argument won't wash.  Google's money comes from advertisers who pay the firm to place ads and inform them who may buy their products, among other things.  (I am no economist and have only the vaguest notions about how Google really makes money, but however they do it, they must be good at it.)  I haven't heard any public howls from advertisers about Google's exploitative prices for ads, and after all, there are other ways to advertise besides Google.  In other words, the advertising market is reasonably price-elastic, in that if Google raised the cost of using their advertising too much, advertisers would start looking elsewhere, such as other search engines or even (gasp!) newspapers.  The dismal state of legacy forms of advertising these days tells me this must not be happening to any great extent.


One other adverse effect of monopolies which isn't that frequently considered is that they tend to stifle innovation.  A good example of this was the reign of the Bell System (affectionately if somewhat cynically called Ma Bell) before the DOJ lawsuit that broke it up into regional firms in the early 1980s.  While Ma Bell could not be faulted for reliability and stability, technological innovation was not its strong suit.  In a decade that saw the invention of integrated circuits, the discovery of the laser, and a man landing on the moon, what was the biggest new technology that Ma Bell offered to the general consumer in the 1960s?  The Princess telephone, a restyled instrument that worked exactly the same as the 1930s model but was available in several designer colors instead of just black or beige.  Give me a break.


Regarding innovation, it's easy to think of several innovative things that Google has offered its users over the years, including something I heard of just the other day. You'll soon be able to whistle or hum a tune to Google and it will try to figure out what the name of the tune is.  This may be Google's equivalent of the Princess telephone, I don't know.  But they're not just sitting on their cash and leaving innovation to others.


In the DOJ's own news release about the lawsuit, they provide a bulleted list that says Google has "entered into agreements with" (a politer phrase than "conspired with") Apple and other hardware companies to prevent installation of search engines other than Google's, and takes the money it makes ("monopoly profits") and buys preferential treatment at search-engine access points. 


So the heart of the matter to the DOJ is the fact that if you wanted to start your own little search-engine business and compete with Google, you'd find yourself walled off from most of the obvious opportunities to do so, because Google has not only got there first, but has made arrangements to stay there as well.


To my mind, this is not so much a David-and-Goliath fight—Goliath being the big company whose name starts with G and David representing the poor exploited consumer—as it is a fight on behalf of other wannabe Googles and firms that are put at a disadvantage by Google's anticompetitive practices.  From Google's point of view, the worst-case scenario would be a breakup, but unless the DOJ decided to regionalize Google in some artificial way, it's hard to see how you'd break up a business whose nature is to be centrally controlled and executed.  Probably what the DOJ will settle for is an opening-up of search-engine installation opportunities to other search-engine companies.  But with $120 billion in cash lying around, Google is well equipped to fight.  This is a battle that's going to last well beyond next month's election, and maybe past the next President's term, whoever that might be. 


Sources:  I referred to articles on the DOJ lawsuit against Google from The Guardian and the Department of Justice website, as well as the Wikipedia article "United States v. Microsoft Corp." 

Monday, October 19, 2020

Facebook's Dilemma


This week's New Yorker carried an article by Andrew Marantz whose main thrust was that Facebook is not doing a good job of moderating its content.  The result is that all sorts of people and groups who, in the view of many experts the reporter interviewed, should not have access to the electronic megaphone of Facebook are allowed to use it.  The list of such offenders is long:  the white-nationalist group Britain First; Jair Bolsonaro, "an autocratic Brazilian politician"; and of course, the President of the United States, Donald Trump. 


Facebook has an estimated 15,000 content moderators working around the world, constantly monitoring what its users post and taking down material that violates what the company calls its Implementation Standards.  Some decisions are easy:  you aren't allowed to post a picture of a baby smoking a cigarette, for example.  But others are harder, especially when the people doing the posting are prominent figures who are likely to generate lots of eye-time and thus advertising revenue for the company. 


The key to the dilemma that Facebook faces was expressed by former content moderator Chris Gray, who wrote a long memo to Facebook CEO Mark Zuckerberg shortly after leaving the company.  He accused Facebook of not being committed to content moderation and said, "There is no leadership, no clear moral compass."


Technology has allowed Facebook to achieve what in principle looks like a very good thing:  in the words of its stated mission, "bring the world closer together."  Unfortunately, when you get closer to some people, you wish you hadn't.  And while Zuckerberg is an unquestioned genius when it comes to extracting billions from a basically simple idea, he and his firm sometimes seem to have an oddly immature notion of human nature.


Author Marantz thinks that Facebook has never had a principled concern about the problem of dangerous content.  Instead, what motivates Facebook to take down posts is not the content itself, but bad publicity about the content.  And indeed, this hypothesis seems to fit the data pretty well.  Although the wacko-extremist group billing itself as QAnon has been in the news for months, Facebook allowed it a presence until just last week, when public pressure on the company mounted to an apparently intolerable level. 


Facebook is a global company operating in a bewildering number of cultures, languages, and legal environments.  It may be instructive to imagine a pair of extreme alternatives that Facebook might choose to take instead of its present muddle of Implementation Standards, which makes nobody happy, including the people it bans. 


One alternative is to proclaim itself a common carrier, open to absolutely any content whatsoever, and attempt to hide behind the shelter of Section 230 of the Communications Decency Act of 1996.  That act gives fairly broad protection to social-media companies from being held liable for what users post.  If you had a complaint about what you saw on Facebook under this regime, Facebook would tell you to go sue the person who posted it. 


The problem with this approach is that, unlike a true common carrier like the old Ma Bell, which couldn't be sued for what people happened to say over the telephone network, Facebook makes more money from postings that attract more attention, whether that attention is directed at something helpful or harmful.  So no matter how hard the company tried to say it wasn't its problem, the world would know that Facebook was profiting from the neo-Nazis, pornographers, QAnon clones, terrorists, and whatever other horrors would come flocking onto an unmoderated platform.  It is impossible to keep one's publicity skirts clean in such a circumstance.


The other extreme Facebook could try is to drop the pretense of being a common carrier altogether, and start acting like an old-fashioned newspaper, or probably more like thousands of newspapers.  A twentieth-century newspaper had character:  you knew pretty much the general kinds of stuff you would see in it, what point of view it took on a variety of questions, and what range of material you would be likely to see in both the editorial and the advertising sections.  If you didn't like the character one paper presented, you could always buy a competing paper, as up to the 1960s at least, most major metropolitan areas in the U. S. supported at least two dailies. 


The closest thing the old-fashioned newspaper had to what is now Facebook was the letters-to-the-editor section.  Nobody had a "right" to have their letter published.  You sent your letter in, and if the editors decided it was worth publishing, they ran it.  But it was carefully selected for content and mass appeal.  And not just anything got in.


Wait a minute, you say.  Where in the world would Facebook get the tens of thousands of editors they'd need to pass judgment on absolutely everything that gets published?  Well, I can't answer all your questions, but I will present one exhibit as an example:  Wikipedia.  Here is a high-quality dynamically updated encyclopedia with almost no infrastructure, subsisting on the work of thousands of volunteers.  No, it doesn't make money, but that's not the point.  My point is only that instead of paying a few thousand contract workers to subject themselves to the psychological tortures of the damned in culling out what Zuckerberg doesn't want to show up, Facebook could go at it from the other end. 


Start by saying that nobody gets to post on Facebook unless one of our editors has passed judgment on it.  When the nutcases and terrorists of the world see their chances of posting dwindle reliably to zero, they'll find some other Internet-based way to cause trouble, never fear.  But Zuckerberg will be able to sleep at night knowing that instead of paying thousands of people to pull weeds all the time, he's started with a nice sterile garden and can plant only the flowers and vegetables he wants to.  And he'd still be able to make money.


The basic problem Facebook faces is that it is trying to be moral with close to zero consensus on what moral is.  At least if the company were divided up into lots of little domains, each with its clearly stated and enforced standards, you would know more or less what to expect when you logged into it, or rather, them. 


Sources:  The article "Explicit Content" by Andrew Marantz appeared on pp. 20-27 of the Oct. 19, 2020 issue of The New Yorker.

Monday, October 12, 2020

Sentiment, Calculation, and Prudence


Some engineers eventually become managers, and managers not only of engineering projects but of entire companies or even public organizations.  The COVID-19 pandemic has thrown a spotlight on the question of how those in charge should allocate scarce resources (including technical resources) in the face of life-threatening situations.  And so I would like to bring you a brief summary sketch of three ways to do that:  two that are widely applied but fundamentally flawed, and one that is not so well known but can actually be applied successfully by ordinary mortals like ourselves.


None of this is original to me, nor to Robert Koons, the philosopher who describes them in a recent issue of First Things.  But originality is not usually a virtue in ethical reasoning, and in what follows, I'll try to show why.


In the 1700s, the Enlightenment thinkers Adam Smith and David Hume devised what Koons calls a "difference-making" way of coming up with moral decisions.  The way this process works is best described by an example.  To properly assess an action or even the lack of an action, you must figure out the net difference it makes to the entire world.  Koons uses the example of a homicidal maniac who, if left to himself, is bound to go out and kill three people.  Suppose you know about this maniac: you can either do nothing, or choose to kill him.  If you do nothing, three people die; if you kill him, only one person dies.  Other things being equal, the world is a better place if fewer people die, so the logic of difference-making says you must kill him.


That's an extreme example, but it vividly illustrates the rational basis of two popular ways of making moral decisions involving public health.  Let's start with the commonly-heard statement that every human life is of infinite value.  Few would dare to argue openly with that contention, yet if you try to use it as a guide for practical action, you run into a dilemma.  Even something as simple as your driving a car to the grocery store exposes other people to some low but nonzero chance of being killed by your vehicle.  If you take the infinite value of human life seriously, you will never drive anywhere, because infinity times (whatever small chance there is of running over someone fatally) is still infinity.


Koons calls one way of dealing with this dilemma "sentimentalism."  He's not talking about people who watch mushy movies; rather, the sentimentalist in his sense abandons logic for emotion and settles for life more or less as it is, but feels bad whenever anybody dies.  Such people exist in a constant state of deploring the world's failure to live up to the ideal that every human life is of infinite worth, but otherwise derive little moral guidance from that principle in practice.


The more hard-headed among us say, "Look, we can't act on infinities, so if we put a finite but large value on human life, at least we can get somewhere."  Applying the difference-making idea to human lives valued at, say, a million dollars, allows you to make calculations and cost-benefit tradeoffs.  Engineers are familiar with technical tradeoffs, so many engineers find this method of moral decision-making quite attractive.  But one of many problems with this approach is that it requires one to take a "view from nowhere":  there are no boundaries to the differences a given decision makes, other than the world itself.  Again, if we try to be truly logically consistent, calculating all the differences a given life-or-death decision makes is practically impossible.


At this point Koons calls Aristotle and St. Thomas Aquinas to the rescue.  Operating under the umbrella of the classical virtue called prudence, Koons asks a given person in a given specific set of circumstances to judge the worthiness of a particular choice facing him or her.  He sets out four things that make a human act of choice worthy:  (1)  whether the human is applying rational thinking to the act, rather than random chance or instinct; (2) what the essential nature of the act is; (3)  what the purpose or end of the act is; and (4) what circumstances are relevant to the act. 


Unlike the difference-making approach, which imposes the impossible burden of near-omniscience on the decider, judging the worthiness of an action doesn't ask the person making the decision to know everything.  You simply take what you know about yourself, the kind of act you're contemplating, what you're trying to accomplish, and any other relevant facts, and then make up your mind.


In this process, some decisions are easy.  Should I kill an innocent person, a child, say?  Item (2) says no, killing innocent people is always wrong. 


Here's another situation Koons uses, but with an example drawn from my personal experience.  You walk outside your building past a bicycle owned by a person you really hate (call him Mr. SOB) and would like to see out of the way.  You notice that someone who hates Mr. SOB even more than you do has quietly disconnected the bike's brake cables, so that unless Mr. SOB checks his brakes before he gets on his bike, he will ride out into the street with no brakes and quite possibly get killed.  If you decide to say or do nothing, you have not committed any explicit act; you have simply refrained from doing anything.  But item (3) says your intentions in refraining were evil ones:  you hope the guy will get killed on his bike.  In this case, not doing anything is a morally wrong act, and you are obliged to warn him of the danger. 


And in less extreme cases, such as when public officials decide how to trade off lockdown restrictions versus spending money on vaccines or public assistance, the same four principles can guide even politicians (!) to make decisions that do not require them to be all-knowing, but do ask them to apply generally accepted moral principles in a practical and judicious way.


Of course, judiciousness and prudence are not evenly distributed virtues, and some people will be better at moral decision-making than others.  But when we look into the fundamental assumptions behind the decision-making process, we see that the difference-making approach has fatal flaws, while the traditional virtue-based approach using prudential judgment can be applied successfully by any individual with a good will and enough intelligence to use it.


Sources:  A much better explanation of these approaches to moral reasoning can be found in Robert C. Koons's original article "Prudence in the Pandemic," which appeared on pp. 39-45 of the October issue of First Things and is also accessible online.