
Monday, January 23, 2023

Conservative Futurism and the Internet

 

In the Winter 2023 issue of The New Atlantis, lawyer and author John Ehrett points out that the bloom of enthusiasm that greeted the advent of the Internet has now faded from that particular rose.  There is now a consensus that the negative effects of social media in particular, and of the whole economic model of "free" services that charge by taking time-slices of one's life, may have begun to outweigh the positive ones.  The question is, what to do about it?

 

Rather than simply parrot various policy ideas that are floating around—as he puts them, "prevention of censorship" or "limitation of corporate power"—he begins with the legacy of an almost completely obscure Russian thinker named Nikolai Fyodorovich Fyodorov (that's the way Wikipedia spells him, anyway).  Fyodorov published almost nothing during his lifetime (1829-1903), but he knew or influenced a lot of people who were or became well-known for their writings and discoveries, including Leo Tolstoy and Konstantin Tsiolkovsky, the Russian scientist who discovered many of the founding principles of rocketry. 

 

Ehrett picks up one thread of Fyodorov's thought, which was saturated in a conservative Russian Orthodoxy that viewed God as the eternal constant presence behind the shifting sands of visible experience.  In Ehrett's words, "A conservative futurism must root itself in the principle of eternity, mirroring that divine timelessness where possible."  It's hard to think of a mindset more diametrically opposed to the "move fast and break things" motto of Facebook, and indeed of the entire software-centric Big-Tech world, which seems to be driving us toward a future that changes much too fast for us to get used to.

 

The proposal Ehrett makes that I'd like to examine concerns the Internet infrastructure, broadly defined:  not only the hardware (backbone, wired and wireless networks, etc.) but the software-based hosting, payment transfer systems, and everything else that makes the Internet work the way it does.  By now, the Internet has made a place for itself in modern society that has become well-nigh essential, just as essential as electric power, water, and sewer systems.  The latter three are regarded as public utilities. 

 

A public utility is like a similar category in transportation law, the "common carrier," in the sense that anyone with the money to pay for the service is entitled to receive it, regardless of that individual's particular characteristics.  My electric company doesn't inquire into my politics or religion before connecting my service drop.  But politics and religion have been the cause of many discriminatory actions by social-media operators against certain users. 

 

Suppose a miracle occurred, and there was universal agreement that we would henceforth treat the Internet as a public utility.  Local political units—towns, counties, states—get to regulate their public utilities.  So the nature of the Internet services provided in a particular locale might well depend on the sensibilities and inclinations of the people who live there.  The Internet as viewed from Pocatello, Idaho, might look very different from the view it would present to a penthouse apartment in Manhattan.  Not better or worse, necessarily—just different.

 

We have gotten so used to the idea that everything on the Internet must necessarily be global—it's even built into the old name "the WorldWideWeb"—that it's hard to get one's mind around the idea of controlling it locally.  But hey—one of the boasts of software developers is that they can make their machines do virtually anything you can imagine, and we can imagine an Internet that is tied to geography, just as construction practices and architecture vary from locale to locale.

 

And that's another thing that Ehrett's conservative futurism would promote:  a renewed emphasis on the physical as opposed to the virtual.  The "slap together today, tear it down tomorrow" attitude that software developers seem to have taken an oath to enact has bled over into other areas of life, notably construction.  More and more parts of the world are beginning to resemble downtown Houston, where they put historic plaques on structures that have endured as long as five years (I exaggerate, but only a little).  Why have past cultures (Venice in the 1400s, for example) created structures whose beauty has endured to the present day, whereas the architecture and building practices of the twenty-first century seem determined to move us all closer to our origins in tent-dwelling nomadic tribes? 

 

All these ideas are good ones, but they rest on the foundation of a worldview that acknowledges the importance of eternity.  Historically, the leaders of a culture have had to embrace such a view for it to have much of an effect on the culture's direction.  For reasons too complex to go into here, we are going through a period in which the notion of eternity is ignored at best, and more likely scorned or mocked.  As philosopher Richard Weaver said in a book title back in 1948, ideas have consequences.  And the underlying beliefs of those who call the shots in the halls where important economic, political, and social decisions are made do not presently harbor ideas that are favorable to conservative futurism.

 

Nevertheless, the idea is out there, and I find it encouraging that Mr. Ehrett seems to be fairly young—he was in Yale Law School as recently as 2016.  If an idea doesn't appeal to young people in large numbers, it doesn't stand a chance.  As old duffers like me pass from the scene, the ideas that will survive are the ones that young people are attracted to.  And they are not bound by old habits of mind that are very hard to break out of. 

 

The obstacles blocking the progress of conservative futurism seem insurmountable.  Imagine the howls of outrage from Silicon Valley if the town councils of a thousand burgs all voted to restrict their Internets in the ways described above.  But there was a time back in the 1970s when Ma Bell seemed like the only possible way to do U. S. telecommunications, and we've managed to overcome that preconceived notion, despite AT&T's struggle to keep what it termed a natural monopoly.  So maybe it can happen.  But if it does, it will be because of the efforts of young people like John Ehrett and his ilk.

 

Sources: "Can There Be a Conservative Futurism?" by John Ehrett appeared on pp. 46-55 of the Winter 2023 issue of The New Atlantis.  I also referred to the Wikipedia article on Nikolai Fyodorovich Fyodorov. 

Monday, October 26, 2020

Is Google Too Big?

 

On Tuesday, Oct. 20, the U. S. Department of Justice (DOJ) filed a lawsuit against Google under the provisions of the Sherman Antitrust Act, charging that the firm is a "monopoly gatekeeper for the Internet."  It is the DOJ's biggest monopolization case against a technology company since 1998, when similar charges were filed against Microsoft.  The Microsoft case failed to break up the company, as the DOJ had once announced its intention to do, but it reduced the dominance of Microsoft's Internet Explorer browser by opening up the browser arena to more competition.

 

By one measure, Google has an 87% market share in the search-engine "market."  I put the word in quotes, because nobody I know gives money directly to Google in exchange for permission to use their search engine.  But as the means by which 87% of U. S. internet users look for virtually anything on the Internet, Google has the opportunity to sell ads and user information to advertisers.  A person who Googles is of course benefiting Google, and not Bing or Ecosia or any of the other search engines that you've probably never heard of.

 

Being first in a network-intensive industry is hugely significant.  When Larry Page and Sergey Brin realized as Stanford graduate students that matrix algebra could be applied to the search-engine problem in what they called the PageRank algorithm, they immediately started trying it out, and were apparently the first people in the world both to conceive of the idea and to put it into practice.  It was a case of being in exactly the right place (Silicon Valley) at the right time (1996).  A decade earlier, and they would have lapsed into obscurity as the abstruse theorists who came up with a great idea too soon.  And if they had been only a few years later, someone else would have come up with the idea and probably beaten them to it.  But as it happened, Google got in early, dominated the infant Internet search-engine market, and has exploded ever since along with the nuclear-bomb-like growth of the WorldWideWeb. 
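
For readers curious about what "matrix algebra applied to the search-engine problem" looks like in practice, here is a minimal sketch of the power-iteration form of PageRank.  The four-page toy "web," the damping factor of 0.85, and the variable names are my own illustrative assumptions, not Google's actual code or data.

import numpy as np

# Hypothetical toy web: each page maps to the pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

pages = sorted(links)
n = len(pages)
index = {p: i for i, p in enumerate(pages)}

# Column-stochastic link matrix: M[i, j] is the probability of hopping
# from page j to page i by following one of j's outgoing links.
M = np.zeros((n, n))
for src, outs in links.items():
    for dst in outs:
        M[index[dst], index[src]] = 1.0 / len(outs)

d = 0.85                      # damping factor (standard illustrative value)
rank = np.full(n, 1.0 / n)    # start from a uniform ranking
for _ in range(100):          # power iteration: repeated matrix-vector products
    rank = (1 - d) / n + d * M @ rank

print(dict(zip(pages, rank.round(3))))   # higher numbers mean "more important" pages

In this toy example, page C ends up ranked highest, since three of the four pages link to it; the insight Page and Brin exploited was that the same repeated matrix-vector computation scales up to billions of pages.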

 

It's hard to say exactly which one of the classic bad things about monopolies is true of Google. 

 

The first thing that comes to mind is that classic monopolies can extract highway-robbery prices from customers, as the customers of a monopoly must buy the product or service in question from the monopoly because they have no viable alternative.  Because users typically don't pay directly for Google's services, this argument won't wash.  Google's money comes from advertisers who pay the firm to place ads and inform them who may buy their products, among other things.  (I am no economist and have only the vaguest notions about how Google really makes money, but however they do it, they must be good at it.)  I haven't heard any public howls from advertisers about Google's exploitative prices for ads, and after all, there are other ways to advertise besides Google.  In other words, the advertising market is reasonably price-elastic, in that if Google raised the cost of using their advertising too much, advertisers would start looking elsewhere, such as other search engines or even (gasp!) newspapers.  The dismal state of legacy forms of advertising these days tells me this must not be happening to any great extent.
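
For what it's worth, the textbook version of "price-elastic" (my gloss, not anything from the DOJ's complaint) is the price elasticity of demand:

E = (percent change in quantity demanded) / (percent change in price)

A market is called elastic when the magnitude of E exceeds 1, meaning that, say, a 10-percent price hike drives away more than 10 percent of the buyers.  The argument above is simply that the demand for advertising on Google behaves this way: raise ad prices too far, and enough advertisers defect to other outlets to make the increase self-defeating.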

 

One other adverse effect of monopolies which isn't that frequently considered is that they tend to stifle innovation.  A good example of this was the reign of the Bell System (affectionately if somewhat cynically called Ma Bell) before the DOJ lawsuit that broke it up into regional firms in the early 1980s.  While Ma Bell could not be faulted for reliability and stability, technological innovation was not its strong suit.  In a decade that saw the invention of integrated circuits, the discovery of the laser, and a man landing on the moon, what was the biggest new technology that Ma Bell offered to the general consumer in the 1960s?  The Princess telephone, a restyled instrument that worked exactly the same as the 1930s model but was available in several designer colors instead of just black or beige.  Give me a break.

 

Regarding innovation, it's easy to think of several innovative things that Google has offered its users over the years, including something I heard of just the other day. You'll soon be able to whistle or hum a tune to Google and it will try to figure out what the name of the tune is.  This may be Google's equivalent of the Princess telephone, I don't know.  But they're not just sitting on their cash and leaving innovation to others.

 

In the DOJ's own news release about the lawsuit, they provide a bulleted list charging that Google has "entered into agreements with" (a politer phrase than "conspired with") Apple and other device and software companies to make Google's search engine the preset default and, in many cases, to prevent the installation of competing search engines, and that it uses the money it makes ("monopoly profits") to buy preferential treatment for its search engine on devices, browsers, and other search access points. 

 

So the heart of the matter to the DOJ is the fact that if you wanted to start your own little search-engine business and compete with Google, you'd find yourself walled off from most of the obvious opportunities to do so, because Google has not only got there first, but has made arrangements to stay there as well.

 

To my mind, this is not so much a David-and-Goliath fight—Goliath being the big company whose name starts with G and David representing the poor exploited consumer—as it is a fight on behalf of other wannabe Googles and firms that are put at a disadvantage by Google's anticompetitive practices.  From Google's point of view, the worst-case scenario would be a breakup, but unless the DOJ decided to regionalize Google in some artificial way, it's hard to see how you'd break up a business whose nature is to be centrally controlled and executed.  Probably what the DOJ will settle for is an opening-up of search-engine installation opportunities to other search-engine companies.  But with $120 billion in cash lying around, Google is well equipped to fight.  This is a battle that's going to last well beyond next month's election, and maybe past the next President's term, whoever that might be. 

 

Sources:  I referred to articles on the DOJ lawsuit against Google from The Guardian at https://www.theguardian.com/technology/2020/oct/20/us-justice-department-antitrust-lawsuit-against-google and https://www.theguardian.com/technology/2020/oct/21/google-antitrust-charges-what-is-next, as well as the Department of Justice website at https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws, and the Wikipedia article "United States v. Microsoft Corp." 

Monday, May 22, 2017

Your Money Or Your Data: The WannaCry Ransomware Attack


On May 12, thousands of users of Windows computers around the globe suddenly saw a red screen with a big padlock image and a headline that read, "Ooops, your files have been encrypted!"  It turned out to be a ransom note generated by an Internet worm called WannaCry.  The ransom demanded was comparatively small—about US $300—but the attack itself was not.  The most critical damage was done in Great Britain, where many National Health Service computers locked up, causing delays in surgery and preventing access to files containing critical patient data.  Fortunately, someone found a kill switch for the worm and its spread was halted, but over 200,000 computers were affected in over 100 countries, according to Wikipedia.
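
To make the "kill switch" a bit more concrete: according to published accounts of the attack, the worm began each run by trying to reach a long, hard-coded, unregistered web address, and it stood down if the request succeeded; a security researcher halted the spread simply by registering that domain.  The sketch below is a schematic illustration of that logic, not WannaCry's actual code, and the address in it is a made-up placeholder.

import urllib.request

# Hypothetical placeholder; the real kill-switch domain was a long gibberish string.
KILL_SWITCH_URL = "http://example-killswitch-placeholder.invalid/"

def kill_switch_triggered(url: str = KILL_SWITCH_URL) -> bool:
    """Return True if the kill-switch address answers, the signal to stand down."""
    try:
        urllib.request.urlopen(url, timeout=5)   # try to reach the hard-coded address
        return True                              # it answered: halt all further activity
    except OSError:
        return False                             # unreachable: in the real attack, keep spreading

if kill_switch_triggered():
    print("Kill-switch address is live; a worm built this way would shut itself down.")
else:
    print("Kill-switch address unreachable; this is the state the attackers expected.")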

No one knows for sure who implemented this attack, although we do know the source of the software that was used:  the U. S. National Security Agency, which developed something called the EternalBlue exploit to spy on computers.  Somehow it got into the wild and was weaponized by a group that may be in North Korea, but no one is sure. 

At this writing, the attack is mostly over except for the cleanup, which is costing millions as backup files are installed or re-created from scratch, if possible.  Experts recommended not paying the ransom, and it's estimated that the perpetrators didn't make much money on the deal, which was payable only in bitcoin, the cryptocurrency that is difficult to trace to its owners. 

Writing in the New York Times, editorialist Zeynep Tufekci of the School of Information and Library Science at the University of North Carolina put the blame for the attack on software companies.  She claims that the way upgrades and security patches are done is itself exploitative and does a disservice to customers, who may have good reasons not to upgrade a system.  This was painfully obvious in Great Britain, where the National Health Service was running lots of old Windows XP systems, although the vast majority of the computers affected were running the more recent Windows 7.  Her point was that life-critical systems such as MRI machines and surgery-related instruments are sold as a package, and incautious upgrading can upset the delicate balance that is struck when a Windows system is embedded in a larger piece of technology.  She suggested that companies like Microsoft take some of the $100 billion in cash they are sitting on and spend it on free upgrades for customers who would normally have to pay for the privilege.

There is plenty of blame to go around in this situation:  the NSA, the NHS, Microsoft, and ordinary citizens who were too lazy to install patches that they had even paid for.  But such a large-scale failure of what has become by now an essential part of modern technological society raises questions that we have been able to ignore, for the most part, up to now.

When I described a much smaller-scale ransomware attack in this space back in March, I likened it to a foreign military invasion.  That analogy doesn't seem to be too popular right now, but I still think it's valid.  What keeps us from viewing the two cases similarly has to do with the way we've been trained to look at software, and the way software companies have managed to use their substantial monopolistic powers to set up conditions in their favor.

Historically, such monopolistic abuse has come to an end only through vigorous government action to call the monopoly to account.  The U. S. National Highway Traffic Safety Administration, for example, can conduct investigations and levy penalties on auto companies that violate the rules or behave negligently.  So far, software firms have almost completely avoided any form of government regulation, and the free-marketers among us have pointed to them as an example of how non-intervention by government can benefit an industry. 

Well, yes and no.  People have made a lot of money in the software and related industries—a few people, anyway, because the field is notorious for the huge returns it can give the few dozen employees and entrepreneurs who happen to get a good idea first, implement it, and dominate a new field (think Facebook).  But when you realize that the same companies charge customers over and over again for the ever-required upgrades and security patches (which are often bundled together, so you can't keep the software you like without having it get hacked sooner or later), a software company becomes hard to distinguish, in some ways, from an old-fashioned protection racket, where a guy flipping a blackjack in his hand comes into your candy store, looks around, and says, "Nice place you got here—a shame if anything should happen to it."

Software performs a valuable service to billions of people, and I'm not calling for a massive takeover of software firms by the government.  And users of software have some responsibility for doing maintenance, assuming that maintenance is of reasonable cost, isn't impossibly hard to do, and doesn't lead to situations that make the software less useful.  But when a major disaster like WannaCry can cause such global havoc, it's time to rethink the fundamentals of how software is designed, sold (technically, it's licensed, not sold), and maintained.  And like it or not, the U. S. market has a huge influence on these things.

Even the threat of regulation can have a most salutary effect on monopolistic firms, which to avoid government oversight often enter voluntarily into industry-wide agreements to implement reforms rather than let the government take over the job.  It's unlikely that the current chaos going on in Washington is a good environment in which to undertake this task, but there needs to be a coordinated, technically savvy, but also ethically deep conversation among the principals—software firms, major customers, and government regulators—to find a different way of doing security and upgrades, which are inextricably tied together. 

I don't know what the answer is, but companies like Microsoft may have to accept some form of restraint on their activities in exchange for remaining free of the heavy hand of government regulation.  The alternative is that we continue muddling along as we have been while the growth of the Internet of Things (IoT) spreads highly vulnerable gizmos all across the globe, setting us up for a tragedy that will make WannaCry look like a minor hiccup.  And nobody wants that to happen.

Sources:  Zeynep Tufekci's op-ed piece "The World Is Getting Hacked.  Why Don't We Do More to Stop It?" appeared on the website of the New York Times on May 13, 2017, at https://www.nytimes.com/2017/05/13/opinion/the-world-is-getting-hacked-why-dont-we-do-more-to-stop-it.html.  I also referred to the Wikipedia article "WannaCry ransomware attack."  My blog post "Ransomware Comes to the Heartland" appeared on Mar. 27, 2017.