Monday, October 26, 2020

Is Google Too Big?

 

On Tuesday, Oct. 20, the U. S. Department of Justice (DOJ) filed a lawsuit against Google under the provisions of the Sherman Antitrust Act, charging that the firm is a "monopoly gatekeeper for the Internet."  It is the DOJ's most significant monopolization case under the Act since 1998, when similar charges were filed against Microsoft.  The Microsoft case failed to break up the company, as the DOJ once announced its intention to do, but it did reduce the dominance of Microsoft's Internet Explorer browser by opening up the browser arena to more competition.

 

By one measure, Google has an 87% market share in the search-engine "market."  I put the word in quotes, because nobody I know gives money directly to Google in exchange for permission to use their search engine.  But as the means by which 87% of U. S. internet users look for virtually anything on the Internet, Google has the opportunity to sell ads and user information to advertisers.  A person who Googles is of course benefiting Google, and not Bing or Ecosia or any of the other search engines that you've probably never heard of.

 

Being first in a network-intensive industry is hugely significant.  When Larry Page and Sergey Brin realized as Stanford graduate students that matrix algebra could be applied to the search-engine problem in what they called the PageRank algorithm, they immediately started trying it out, and were apparently the first people in the world both to conceive of the idea and to put it into practice.  It was a case of being in exactly the right place (Silicon Valley) at exactly the right time (1996).  A decade earlier, and they would have lapsed into obscurity as the abstruse theorists who came up with a great idea too soon.  A few years later, and someone else would probably have come up with the idea and beaten them to it.  But as it happened, Google got in early, dominated the infant Internet search-engine market, and has exploded ever since along with the nuclear-bomb-like growth of the World Wide Web. 
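
 

For readers curious about what "matrix algebra applied to the search-engine problem" looks like in practice, here is a minimal sketch in Python of the power-iteration idea behind PageRank.  It is a toy example with a made-up four-page web; the link matrix, the damping factor, and the iteration count are illustrative assumptions on my part, not anything taken from Google's actual system.

    # Toy PageRank sketch (illustrative only; nothing here is Google's real code).
    import numpy as np

    # Column j of L says where page j's outgoing links point, normalized to sum to 1.
    L = np.array([
        [0.0, 0.5, 1.0, 0.0],
        [0.5, 0.0, 0.0, 0.5],
        [0.5, 0.5, 0.0, 0.5],
        [0.0, 0.0, 0.0, 0.0],
    ])

    d = 0.85                    # damping factor used in the original PageRank paper
    n = L.shape[0]
    rank = np.full(n, 1.0 / n)  # start every page with equal rank

    for _ in range(100):        # repeated matrix-vector multiplication converges
        rank = (1 - d) / n + d * L @ rank

    print(rank)                 # pages with more (and better-ranked) inbound links score higher

The punch line is that the whole thing reduces to repeated matrix multiplication, which is exactly the kind of computation that scales up well as the web grows.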

 

It's hard to say exactly which one of the classic bad things about monopolies is true of Google. 

 

The first thing that comes to mind is that classic monopolies can extract highway-robbery prices from customers, as the customers of a monopoly must buy the product or service in question from the monopoly because they have no viable alternative.  Because users typically don't pay directly for Google's services, this argument won't wash.  Google's money comes from advertisers who pay the firm to place ads and inform them who may buy their products, among other things.  (I am no economist and have only the vaguest notions about how Google really makes money, but however they do it, they must be good at it.)  I haven't heard any public howls from advertisers about Google's exploitative prices for ads, and after all, there are other ways to advertise besides Google.  In other words, the advertising market is reasonably price-elastic, in that if Google raised the cost of using their advertising too much, advertisers would start looking elsewhere, such as other search engines or even (gasp!) newspapers.  The dismal state of legacy forms of advertising these days tells me this must not be happening to any great extent.

 

Another adverse effect of monopolies, one that isn't considered as often, is that they tend to stifle innovation.  A good example of this was the reign of the Bell System (affectionately if somewhat cynically called Ma Bell) before the DOJ lawsuit that broke it up into regional firms in the early 1980s.  While Ma Bell could not be faulted for reliability and stability, technological innovation was not its strong suit.  In a decade that saw the invention of integrated circuits and the laser, and a man landing on the moon, what was the biggest new technology that Ma Bell offered to the general consumer in the 1960s?  The Princess telephone, a restyled instrument that worked exactly the same as the 1930s model but was available in several designer colors instead of just black or beige.  Give me a break.

 

Regarding innovation, it's easy to think of several innovative things that Google has offered its users over the years, including something I heard of just the other day. You'll soon be able to whistle or hum a tune to Google and it will try to figure out what the name of the tune is.  This may be Google's equivalent of the Princess telephone, I don't know.  But they're not just sitting on their cash and leaving innovation to others.

 

In the DOJ's own news release about the lawsuit, they provide a bulleted list that says Google has "entered into agreements with" (a politer phrase than "conspired with") Apple and other hardware companies to prevent installation of search engines other than Google's, and takes the money it makes ("monopoly profits") and buys preferential treatment at search-engine access points. 

 

So the heart of the matter to the DOJ is the fact that if you wanted to start your own little search-engine business and compete with Google, you'd find yourself walled off from most of the obvious opportunities to do so, because Google has not only got there first, but has made arrangements to stay there as well.

 

To my mind, this is not so much a David-and-Goliath fight—Goliath being the big company whose name starts with G and David representing the poor exploited consumer—as it is a fight on behalf of other wannabe Googles and firms that are put at a disadvantage by Google's anticompetitive practices.  From Google's point of view, the worst-case scenario would be a breakup, but unless the DOJ decided to regionalize Google in some artificial way, it's hard to see how you'd break up a business whose nature is to be centrally controlled and executed.  Probably what the DOJ will settle for is an opening-up of search-engine installation opportunities to other search-engine companies.  But with $120 billion in cash lying around, Google is well equipped to fight.  This is a battle that's going to last well beyond next month's election, and maybe past the next President's term, whoever that might be. 

 

Sources:  I referred to articles on the DOJ lawsuit against Google from The Guardian at https://www.theguardian.com/technology/2020/oct/20/us-justice-department-antitrust-lawsuit-against-google and https://www.theguardian.com/technology/2020/oct/21/google-antitrust-charges-what-is-next, as well as the Department of Justice website at https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws, and the Wikipedia article "United States v. Microsoft Corp." 

Monday, October 19, 2020

Facebook's Dilemma

 

This week's New Yorker carried an article by Andrew Marantz whose main thrust was that Facebook is not doing a good job of moderating its content.  The result is that all sorts of people and groups that, in the view of many experts the reporter interviewed, should not have the use of Facebook's electronic megaphone are nevertheless allowed to use it.  The list of such offenders is long:  the white-nationalist group Britain First; Jair Bolsonaro, "an autocratic Brazilian politician"; and of course, the President of the United States, Donald Trump. 

 

Facebook has an estimated 15,000 content moderators working around the world, constantly monitoring what its users post and taking down material that violates what the company calls its Implementation Standards.  Some decisions are easy:  you aren't allowed to post a picture of a baby smoking a cigarette, for example.  But others are harder, especially when the people doing the posting are prominent figures who are likely to generate lots of eye-time and thus advertising revenue for the company. 

 

The key to the dilemma that Facebook faces was expressed by former content moderator Chris Gray, who wrote a long memo to Facebook CEO Mark Zuckerberg shortly after leaving the company.  He accused Facebook of not being committed to content moderation and said, "There is no leadership, no clear moral compass."

 

Technology has allowed Facebook to achieve what in principle looks like a very good thing:  in the words of its stated mission, "bring the world closer together."  Unfortunately, when you get closer to some people, you wish you hadn't.  And while Zuckerberg is an unquestioned genius when it comes to extracting billions from a basically simple idea, he and his firm sometimes seem to have an oddly immature notion of human nature.

 

Author Marantz thinks that Facebook has never had a principled concern about the problem of dangerous content.  Instead, what motivates Facebook to take down posts is not the content itself, but bad publicity about the content.  And indeed, this hypothesis seems to fit the data pretty well.  Although the wacko-extremist group billing itself as QAnon has been in the news for months, Facebook allowed it a presence until just last week, when public pressure on the company mounted to an apparently intolerable level. 

 

Facebook is a global company operating in a bewildering number of cultures, languages, and legal environments.  It may be instructive to imagine a pair of extreme alternatives that Facebook might choose to take instead of its present muddle of Implementation Standards, which makes nobody happy, including the people it bans. 

 

One alternative is to proclaim itself a common carrier, open to absolutely any content whatsoever, and attempt to hide behind the shelter of Section 230 of the Communications Decency Act of 1996.  That act gives fairly broad protection to social-media companies from being held liable for what users post.  If you had a complaint about what you saw on Facebook under this regime, Facebook would tell you to go sue the person who posted it. 

 

The problem with this approach is that, unlike a true common carrier like the old Ma Bell, which couldn't be sued for what people happened to say over the telephone network, Facebook makes more money from postings that attract more attention, whether that attention is directed at something helpful or something harmful.  So no matter how hard the company tried to say it wasn't their problem, the world would know that Facebook was profiting from the neo-Nazis, pornographers, QAnon clones, terrorists, and whatever other horrors would come flocking onto an unmoderated platform.  It is impossible to keep one's publicity skirts clean in such a circumstance.

 

The other extreme Facebook could try is to drop the pretense of being a common carrier altogether, and start acting like an old-fashioned newspaper, or probably more like thousands of newspapers.  A twentieth-century newspaper had character:  you knew pretty much the general kinds of stuff you would see in it, what point of view it took on a variety of questions, and what range of material you would be likely to see in both the editorial and the advertising sections.  If you didn't like the character one paper presented, you could always buy a competing paper, as up to the 1960s at least, most major metropolitan areas in the U. S. supported at least two dailies. 

 

The closest thing the old-fashioned newspaper had to what is now Facebook was the letters-to-the-editor section.  Nobody had a "right" to have their letter published.  You sent your letter in, and if the editors decided it was worth publishing, they ran it.  But it was carefully selected for content and mass appeal.  And not just anything got in.

 

Wait a minute, you say.  Where in the world would Facebook get the tens of thousands of editors they'd need to pass on absolutely everything that gets published?  Well, I can't answer all your questions, but I will present one exhibit as an example:  Wikipedia.  Here is a high-quality, dynamically updated encyclopedia with a comparatively tiny paid staff, subsisting largely on the work of thousands of volunteers.  No, it doesn't make money, but that's not the point.  My point is only that instead of paying a few thousand contract workers to suffer the psychological tortures of the damned culling out what Zuckerberg doesn't want to show up, Facebook could go at the problem from the other end. 

 

Start by saying that nobody gets to post on Facebook unless one of our editors has passed judgment on it.  When the nutcases and terrorists of the world see their chances of posting reliably dwindle to zero, they'll find some other Internet-based way to cause trouble, never fear.  But Zuckerberg will be able to sleep at night knowing that instead of paying thousands of people to pull weeds all the time, he's started with a nice sterile garden and can plant only the flowers and vegetables he wants to.  And he'd still be able to make money.

 

The basic problem Facebook faces is that it is trying to be moral with close to zero consensus on what moral is.  At least if the company were divided up into lots of little domains, each with clearly stated and enforced standards, you would know more or less what to expect when you logged into it, or rather, them. 

 

Sources:  The article "Explicit Content" by Andrew Marantz appeared on pp. 20-27 of the Oct. 19, 2020 issue of The New Yorker.

Monday, October 12, 2020

Sentiment, Calculation, and Prudence

 

Some engineers eventually become managers, and managers not only of engineering projects but of entire companies or even public organizations.  The COVID-19 pandemic has thrown a spotlight on the question of how those in charge should allocate scarce resources (including technical resources) in the face of life-threatening situations.  And so I would like to bring you a brief summary sketch of three ways to do that:  two that are widely applied but fundamentally flawed, and one that is not so well known but can actually be applied successfully by ordinary mortals like ourselves.

 

None of this is original to me, nor to Robert Koons, the philosopher who describes them in a recent issue of First Things.  But originality is not usually a virtue in ethical reasoning, and in what follows, I'll try to show why.

 

In the 1700s, the Enlightenment thinkers Adam Smith and David Hume devised what Koons calls a "difference-making" way of coming up with moral decisions.  The way this process works is best described by an example.  To properly assess an action or even the lack of an action, you must figure out the net difference it makes to the entire world.  Koons uses the example of a homicidal maniac who, if left to himself, is bound to go out and kill three people.  Suppose you know about this maniac: you can either do nothing, or choose to kill him.  If you do nothing, three people die; if you kill him, only one person dies.  Other things being equal, the world is a better place if fewer people die, so the logic of difference-making says you must kill him.

 

That's an extreme example, but it vividly illustrates the rational basis of two popular ways of making moral decisions involving public health.  Let's start with the commonly heard statement that every human life is of infinite value.  Few would dare to argue openly with that contention, yet if you try to use it as a guide for practical action, you run into a dilemma.  Even something as simple as driving a car to the grocery store exposes other people to some low but nonzero chance of being killed by your vehicle.  If you take the infinite value of human life seriously, you will never drive anywhere, because infinity times whatever small chance there is of running over someone fatally is still infinity.
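
 

Written out in expected-value terms (the notation below is mine, not anything from Koons's article), the dilemma is simply that

    \[ \text{expected loss} \;=\; V_{\text{life}} \times p_{\text{fatal accident}} \]

and if the value of a life is taken as infinite, the expected loss is infinite for any nonzero probability, however tiny, so the calculation forbids driving at all.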

 

Koons calls one way of dealing with this dilemma "sentimentalism."  He's not talking about people who watch mushy movies, but the fact that the sentimentalist, in the meaning he uses, abandons logic for emotion and settles for life more or less as it is, but feels bad whenever anybody dies.  Such people exist in a constant state of deploring the world's failure to live up to the ideal that every human life is of infinite worth, but otherwise derive little moral guidance from that principle in practice.

 

The more hard-headed among us say, "Look, we can't act on infinities, so if we put a finite but large value on human life, at least we can get somewhere."  Applying the difference-making idea to human lives valued at, say, a million dollars apiece allows you to make calculations and cost-benefit tradeoffs.  Engineers are familiar with technical tradeoffs, so many engineers find this method of moral decision-making quite attractive.  But one of many problems with this approach is that it requires one to take a "view from nowhere":  there are no boundaries to the differences a given decision makes, other than the world itself.  Again, if we try to be truly logically consistent, calculating all the differences a given life-or-death decision makes is practically impossible.
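
 

To see both why engineers find this framing congenial and how arbitrary its inputs are, here is a toy difference-making calculation in Python.  The numbers are entirely made up; nothing below comes from Koons or from any real policy analysis.

    # Toy cost-benefit ("difference-making") calculation with made-up numbers.
    VALUE_OF_LIFE = 1_000_000   # dollars: the assumed finite value of one human life

    def net_benefit(risk_reduction: float, people_affected: int, cost: float) -> float:
        """Expected dollar benefit of a safety measure, minus what it costs."""
        expected_lives_saved = risk_reduction * people_affected
        return expected_lives_saved * VALUE_OF_LIFE - cost

    # A measure that cuts each of 10,000 people's risk of death by 0.1% and
    # costs $5 million "pencils out" under these assumptions:
    print(net_benefit(0.001, 10_000, 5_000_000))   # prints 5000000.0

The verdict flips, though, if you halve the assumed value of a life or redraw the circle of "people affected," which is exactly the view-from-nowhere problem.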

 

At this point Koons calls Aristotle and St. Thomas Aquinas to the rescue.  Operating under the umbrella of the classical virtue called prudence, Koons asks a given person in a given specific set of circumstances to judge the worthiness of a particular choice facing him or her.  He sets out four things that make a human act of choice worthy:  (1)  whether the human is applying rational thinking to the act, rather than random chance or instinct; (2) what the essential nature of the act is; (3)  what the purpose or end of the act is; and (4) what circumstances are relevant to the act. 

 

Unlike the difference-making approach, which imposes the impossible burden of near-omniscience on the decider, judging the worthiness of an action doesn't ask the person making the decision to know everything.  You simply take what you know about yourself, the kind of act you're contemplating, what you're trying to accomplish, and any other relevant facts, and then make up your mind.

 

In this process, some decisions are easy.  Should I kill an innocent person, a child, say?  Item (2) says no, killing innocent people is always wrong. 

 

Here's another situation Koons uses, but with an example drawn from my personal experience.  You walk outside your building past a bicycle owned by a person you really hate (call him Mr. SOB) and would like to see out of the way.  You notice that someone who hates Mr. SOB even more than you do has quietly disconnected the bike's brake cables, so that unless Mr. SOB checks his brakes before he gets on his bike, he will ride out into the street with no brakes and quite possibly get killed.  If you decide to say or do nothing, you have not committed any explicit act; you have simply refrained from doing anything.  But item (3) says your intentions in refraining were evil ones:  you hope the guy will get killed on his bike.  In this case, not doing anything is a morally wrong act, and you are obliged to warn him of the danger. 

 

And in less extreme cases, such as when public officials decide how to trade off lockdown restrictions against spending money on vaccines or public assistance, the same four principles can guide even politicians (!) to make decisions that do not require them to be all-knowing, but do ask them to apply generally accepted moral principles in a practical and judicious way.

 

Of course, judiciousness and prudence are not evenly distributed virtues, and some people will be better at moral decision-making than others.  But when we look into the fundamental assumptions behind the decision-making process, we see that the difference-making approach has fatal flaws, while the traditional virtue-based approach using prudential judgment can be applied successfully by any individual with a good will and enough intelligence to use it.

 

Sources:  A much better explanation of these approaches to moral reasoning can be found in Robert C. Koons's original article "Prudence in the Pandemic," which appeared on pp. 39-45 of the October 2020 issue of First Things, and is also accessible online at https://www.firstthings.com/article/2020/10/prudence-in-the-pandemic.

Monday, October 05, 2020

From Vikings to Ransomware Attacks

 

An item in Wired recently pointed out that anybody who facilitates ransomware payments to certain U. S. Treasury-sanctioned actors may also face penalties for violating Office of Foreign Assets Control (OFAC) regulations, which prohibit such dealings.  This puts ransomware victims in a worse bind than ever:  pay up to free your kidnapped data and risk being fined by the Treasury, or refuse and do without your data. 

 

Perhaps this is just a backwards way for the Treasury Department to encourage organizations that rely on IT facilities—which is nearly everybody nowadays—to be more vigilant in preventing cyberattacks.  And that's not a bad thing.  But if I worked for the IT services division of a large firm or government agency, I would feel somewhat put upon by the notion that rather than helping me avoid ransomware attackers, the Treasury Department was letting me know that if I got attacked, they'd be standing by to make sure any ransom I paid didn't go to sanctioned criminals. 

 

The utter permeability of national boundaries to the Internet-mediated World Wide Web has led us to ignore some long-standing expectations and categories of thought, and I think we ignore them at our peril.  To see what I mean, let me take you back for a moment to Canterbury, England in the fall of 1011 A. D.  A couple of years earlier, an army of Danish Vikings led by Thorkell the Tall had threatened the city, but the populace raised and paid a 3,000-pound silver ransom, and Thorkell turned instead to points south, leaving Canterbury alone for the time being. 

 

But in 1011, Thorkell attacked Canterbury again, and the Anglo-Saxons decided to fight this time.  After a three-week battle, the Vikings broke through the city's defenses and captured the Archbishop of Canterbury, who was named Aelfheah, and a number of other high officials.  After burning down Canterbury Cathedral, Thorkell ran off with the Archbishop and demanded another 3,000-pound ransom.

 

But the Archbishop himself let it be known that he didn't want to be ransomed, and didn't want his people to pay up.  After seven months of holding on to Aelfheah hoping for a ransom, some of the Vikings under Thorkell lost patience (the Vikings were not known for that virtue), and began to pelt Aelfheah with cow bones, finishing him off with a blow from the blunt end of an axe.  Thorkell, who had tried to stop his men from killing Aelfheah, felt so bad about it that he eventually joined forces with the English king, Aethelred the Unready, and fought bravely on his behalf.

 

What has that got to do with ransomware?  More than you might think. 

 

For one thing, our little history lesson shows that placating kidnappers and other demanders of ransom tends to lead, not to the end of ransom demands, but to their encouragement.  Thorkell may have figured, "Hey, we got 3,000 pounds of silver from Canterbury a couple of years ago, let's go try it again."  So like blackmail payments and similar shady dealings, the payment of ransom for either people or data just encourages the bad actors to keep doing what they're doing, in the long run.

 

Secondly, the people of Canterbury didn't expect Aelfheah to fight off the Vikings all by himself.  They mounted a united defense, and though they failed to stop Thorkell the second time, things could have turned out differently if the balance of power had been more in favor of the Anglo-Saxons.  But they would have had to plan for such an attack and devote resources to preparing their armed forces.

 

Because ransomware attackers don't show up on the streets of U. S. cities armed with tanks and flamethrowers, they escape being placed in the same category as we would place the Vikings in 1011 A. D.:  as invaders bent on pillage and destruction.  But that's what they are.

 

It's true that few if any people have died as a direct result of a ransomware attack.  But the net effect is the same:  an invasion of a sovereign territory by (typically) foreign actors leads to money going into the pockets of the attackers. 

 

In its limited bureaucratic way, the U. S. Treasury is alerting potential victims of ransomware attacks that paying ransom to certain sanctioned organizations can get you in trouble with the government, on top of whatever expenses and problems the attack itself causes.  But it's apparently not the Treasury's job to help you defend yourself against such attacks.

 

At a recent social gathering, I met a youngish man who turned out to be a freelance IT security specialist, the kind who goes around trying to attack systems to discover their vulnerabilities and then informs the client about the weak spots he's found.  I didn't spend enough time talking with him to discover whether one of his tricks involves threatening ransomware attacks—it would be hard to try that without actually fouling up a client's systems, which goes a little beyond the remit of a consultant.  But such people are an important part of the overall cybersecurity policy that every organization of any size needs to have.  

 

I wish there were some way the U. S. military could guard our Internet borders the way they guard our physical borders.  But the way the Internet has grown makes that nearly impossible, and probably inadvisable as well.  For whatever reason, IT-intensive organizations have to do the equivalent of paying for their own guards and military defenses against the attacks of cyber-Vikings, rather than relying on the government for security as we do at our physical borders. 

 

But minds and organizations change slowly, which is why there are so many outdated operating systems out there, just begging to be hacked or attacked by ransomware.  Maybe some kind of tax credit for IT security expenditures would encourage organizations (at least private ones) to safeguard their systems well enough that most ransomware attacks would fail.  Like anybody else, the attackers go around looking for low-hanging fruit, and I suspect that many ransomware attacks would have been foiled by more vigilant IT security on the part of the victims.

 

The long-term solution, if there is one, is increased vigilance and more resources devoted to IT security, plus a disinclination to pay ransomware attackers.  But as long as there are people out there who would rather raid and invade for pay than earn a living in a more peaceful way, we will probably have to deal with ransomware attacks.

 

Sources:  Wired's website carried an item about the U. S. Treasury's warning concerning payments to certain ransomware attackers at https://www.wired.com/story/ransomware-fine-grindr-bug-joker-malware-security-news/.  The Treasury's announcement itself can be viewed at https://home.treasury.gov/system/files/126/ofac_ransomware_advisory_10012020_1.pdf.  And I got the story about Thorkell the Tall and Aelfheah from the Wikipedia article "Siege of Canterbury."