Showing posts with label Microsoft.

Monday, July 22, 2024

CrowdStrike Violates "No Headlines" Rule

 

An old friend of mine summarized engineering ethics for me once in two words:  "No headlines."  Meaning, I suppose, that if an engineering firm does its job right, there is no reason for it to show up in news headlines, which tend to focus on bad news. 

 

Well, the cybersecurity firm CrowdStrike, based just up the road from me in Austin, Texas, managed to break that rule spectacularly last Friday, July 19, when they issued what was supposed to be a routine "sensor configuration update." 

 

CrowdStrike makes cloud-based software that helps prevent cyberattacks and other security breaches, and one part of doing that involves sensing attacks.  Because the nature of cyberattacks changes daily, security software firms such as CrowdStrike have to update their software constantly, and that includes updating the sensor parts too.  It's not clear to me whether the updates get installed by IT departments or by individuals, but I would suspect the former.  The product involved in last Friday's update, called Falcon, is used exclusively on Microsoft Windows machines, of which there are about 1.4 billion in the world.

 

Something was radically wrong with the update sent out near 11 PM Austin time, because on about 8.5 million PCs, a logic error in the update caused them to freeze up and exhibit the famed "blue screen of death" (BSOD).  One way I tell my students they can assess the relative importance of a given technology is to imagine that an evil genie waves a magic wand at midnight, and suddenly all examples of that technology throughout the world vanish.  How big would the disruption be? 

 

Well, something like that happened Friday, and the disruptions made a ton of headlines.  Most major U. S. airlines suddenly found themselves without a scheduling or ticketing system.  Schools and hospitals across the U. S. were deprived of their computer systems.  Even 911 emergency-call systems in some cities crashed. 

 

CrowdStrike CEO and co-founder George Kurtz issued an apology Saturday, saying in a blog post "I want to sincerely apologize directly to all of you for today's outage."  Once their engineers discovered what was going on, CrowdStrike rushed to provide a fix, which involved rebooting each paralyzed PC into safe mode, deleting a certain file, and rebooting again.  But multiply that fairly simple task, which could be done easily by an IT tech and with difficulty by anyone else, times 8.5 million PCs, and it was clear that this mess wasn't going to be cleaned up overnight.
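For the technically curious, here is a minimal sketch of what that per-machine fix amounted to, based on publicly reported guidance.  The directory and the "C-00000291*.sys" file pattern come from press accounts of CrowdStrike's instructions; treat them as illustrative, and note that the real procedure had to be carried out by hand from safe mode with administrator rights.

```python
# Hedged sketch of the manual remediation: boot into safe mode, delete the
# faulty "channel file(s)" matching the publicly reported pattern, reboot.
# Path and pattern are from press accounts, shown here for illustration only.
import glob
import os

PATTERN = r"C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys"

def delete_faulty_channel_files(pattern: str = PATTERN) -> int:
    removed = 0
    for path in glob.glob(pattern):
        os.remove(path)  # remove the defective update file
        removed += 1
    return removed

if __name__ == "__main__":
    count = delete_faulty_channel_files()
    print(f"Removed {count} file(s); now reboot normally.")
```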

 

As computer foulups go, this one was fairly minor, unless you were trying to get somewhere by plane over the weekend.  I don't know for sure, but it's possible it could have been avoided if CrowdStrike had a policy of trying out each of their updates on a garden-variety PC to make sure it works.  Maybe they did, and there's some subtle difference between their test bed and the 8.5 million PCs that froze up.  That's for them to figure out, assuming they weather the storm of lawsuits looming on the horizon once the accountants of affected organizations figure out how much revenue was lost in the flight delays, scheduling problems, and other issues caused by the glitch.

 

The crowning irony of the whole thing was, of course, that the problem was caused by software that was designed to prevent problems.  This isn't the first time that safety equipment turned out to be dangerous.  In the auto industry, a years-long slow-motion tragedy was caused by the carelessness of Takata, a manufacturer of airbag inflators, which sold inflators with a defect that caused them to detonate and send flying metal shrapnel into the car's passengers instead of just inflating the airbag.  After years of recalls, Takata declared bankruptcy in 2017 and is out of business.

 

One hopes that this single screwup will not spell doom for a cybersecurity company that up to now seems to have been doing a good job of preventing computer breaches and otherwise keeping out of trouble.  It's a public corporation with about 8,000 employees, so it's unlikely that giant firms such as American Airlines could recoup their losses without simply bankrupting the whole outfit.  If Microsoft itself were directly responsible, that would be another question, but Microsoft's only involvement was the fact that the product ran exclusively on Windows machines. 

 

This whole episode can serve as a cautionary experience to help us prepare for something bigger that might come down the technology pike in the future.  Malicious actors are constantly trying to exploit vulnerabilities for various nefarious purposes, ranging from vandalistic amusement all the way up to strategic military incursions mounted against multiple countries.  It would be worthwhile to imagine the worst that could happen computer-wise, and then at least ask the question, "What would we do about it?" 

 

My sister works at a large hospital where they have toyed with the idea of deliberately turning off all their computers every so often and trying to keep their operations going with paper and phones.  They've never mustered the nerve to do it, partly because some things would be flatly impossible to do without computers, and the reduced service capability would be a disservice to the public they have committed to serve. 

 

But for organizations that could manage it, it would be a worthwhile exercise to see if doing without computers for a set time is possible at all, and what would have to change to make it possible if it isn't presently. 

 

In researching this article, I discovered that of those 1.4 billion PCs running Windows out there, about 1 billion are still running Windows 10, which Microsoft plans to stop supporting in 2025.  I happen to own one of those legacy Windows 10 machines, which can't be upgraded to Windows 11 because of a newfangled Windows 11 hardware requirement.  So we can expect another disruption around October of 2025, when Windows 10 support ends.  Let's just hope it isn't as sudden and startling as the CrowdStrike blue screens of death.

 

Sources:  I consulted the articles "Huge Microsoft Outage Caused by CrowdStrike Takes Down Computers Around the World" at https://www.wired.com/story/microsoft-windows-outage-crowdstrike-global-it-probems/ and "CrowdStrike discloses new technical details behind outage" at https://www.scmagazine.com/news/crowdstrike-discloses-new-technical-details-behind-outage, the ZDNet article at https://www.zdnet.com/article/is-windows-10-too-popular-for-its-own-good/ for the statistic about Windows computers, and the Wikipedia article on CrowdStrike.

Monday, February 15, 2021

Major Embarrassment for Microsoft: No Majorana Particle After All

 

In 1937, the Italian physicist Ettore Majorana published a paper predicting the existence of something that came to be known as the Majorana particle.  In the society of subatomic particles, the Majorana is rather standoffish:  without a positive or negative charge, without an antiparticle (technically, it's its own antiparticle) and without even a magnetic or electric dipole moment.  Even the famously neutral neutron has a magnetic dipole moment.  A few months after writing the paper, Majorana sent an enigmatic note to a colleague saying he was sorry for what he was about to do, got on a boat bound from Palermo to Naples, and was never seen again. 

 

Of course, the physics community started looking for Majorana particles right away, and the search intensified after people began trying to make quantum computers.  Theoretically, a quantum computer can perform certain kinds of calculations many orders of magnitude faster than ordinary bit-based computers, because each "qubit" can hold a combination of states and thus process more information in a given amount of time.  (That explanation probably gives physicists a headache, but it's the closest I can get in the space I have.)
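To make that "combination of states" remark slightly more concrete, here is a toy classical calculation; nothing in it depends on any particular quantum hardware.  An n-qubit register is described by 2^n complex amplitudes, so the amount of information a quantum computer juggles grows exponentially with the number of qubits.

```python
# Toy illustration: an n-qubit state is a vector of 2**n complex amplitudes.
# Here we build the uniform superposition (equal amplitude on every basis
# state) and watch the state-vector size explode as qubits are added.
import numpy as np

def uniform_superposition(n_qubits: int) -> np.ndarray:
    dim = 2 ** n_qubits
    # Normalize so the squared amplitudes (probabilities) sum to 1.
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for n in (1, 2, 10, 20):
    state = uniform_superposition(n)
    print(f"{n:>2} qubits -> {state.size:,} amplitudes")
```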

 

Anyway, it turns out that if engineers could make Majorana particles, their standoffish nature would become a virtue, because the quantum computers people have devised up to now all suffer from a common problem:  insufficient isolation from the environment.  The quantum states needed to do quantum calculations are very delicate, and any little disturbance from magnetic or electric fields, or just the passage of time, busts up the party so much that extensive error correction and multiple processing of the same problem are necessary.  Theorists say that a quantum computer using Majorana particles would be much less prone to such errors because the particles are so inert, relatively speaking. 
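As a loose classical analogy for that error-correction overhead (real quantum error correction is far subtler, and everything below is invented for illustration), consider encoding one bit as three noisy copies and recovering it by majority vote: the redundancy buys a lower error rate at the cost of extra hardware and work, which is roughly the bargain today's non-Majorana quantum computers are stuck with.

```python
# Classical analogy only: redundancy plus majority voting cuts the error
# rate but multiplies the hardware needed, which is the flavor of overhead
# that fragile qubits impose on present-day quantum computers.
import random

def noisy_copy(bit: int, flip_prob: float = 0.1) -> int:
    # Each stored copy flips with probability flip_prob.
    return bit ^ (random.random() < flip_prob)

def majority(bits: list) -> int:
    return int(sum(bits) > len(bits) / 2)

trials = 100_000
raw = sum(noisy_copy(0) for _ in range(trials))
coded = sum(majority([noisy_copy(0) for _ in range(3)]) for _ in range(trials))
print(f"raw error rate:   {raw / trials:.3f}")    # about 0.100
print(f"coded error rate: {coded / trials:.3f}")  # about 0.028
```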

 

So the quantum-computing world was quite impressed back in 2018 when researchers funded by Microsoft announced that they'd finally made a Majorana particle.  The alleged particle wasn't "fundamental" in the sense that it was a single entity.  Rather, they said it was a kind of collective phenomenon created by electron interactions in a cold semiconductor. 

 

There's nothing fishy about that.  Even my EE undergrads learn about positively charged "particles" called holes, which turn out to be a collective effect of electrons in a semiconductor.  But in January of 2021, the same research group published a new paper saying basically, "Oops, we screwed up."  Some critical data tending to falsify the result was omitted from the 2018 paper, which they are withdrawing. 

 

The new paper came about when Sergey Frolov, another physicist, questioned the results of the 2018 paper and obtained their raw data, which included points that were not shown in the 2018 paper. 

 

Leo Kouwenhoven, the leader of the Microsoft research team, released the new paper before peer review, along with a note retracting their earlier paper.  He declined to comment further while the new paper is under peer review, but it's fairly clear what happened, as described in a recent report in Wired. 

 

Under pressure to deliver results, the Microsoft team omitted a part of their data, allegedly for "esthetic" reasons, and published the 2018 claim to have discovered a Majorana particle.  In retrospect, omitting the esthetically displeasing data was not a good idea.  But they did the right thing in providing Frolov with unpublished as well as published data, and in issuing a new paper showing that they were basically incorrect in their 2018 interpretation of the same data.

 

Physics is hard enough even when the only motivation is intellectual curiosity.  When the auxiliary pressures of continued funding, fame, fortune, or tenure get into the mix, it's tempting to make claims that later can't withstand intense scrutiny.

 

In my own peculiar little field of ball lightning research, I see this quite often.  Ball lightning is an atmospheric phenomenon which thousands of people have seen over the centuries.  There is a consistent set of characteristics which leaves little doubt that there is a real thing there which occurs rarely, but not so rarely that people never see it.  However, there is as yet no generally accepted scientific explanation for ball lightning, and no one has ever been able to produce anything in a lab that shows the most common characteristics of ball lightning.  Worse yet, there are no photographs or videos that are generally accepted as showing an actual ball lightning object.

 

Of course, taking a photo or video or obtaining other kinds of objective data on ball lightning would be a major accomplishment, and many people have claimed to do so over the years.  But subsequent investigations, either by the original researchers or by someone else, usually show that there is a simpler explanation than ball lightning for what was photographed, or else leave the question unresolved.

 

I don't attribute base motives to people who publish exciting-looking data that later turns out to be not so exciting.  There's always the chance it will prove to be the real thing, and one important reason for publishing scientific data and interpretations is to get them out in the open so others can look at them and criticize them if necessary, just as Frolov did with the Microsoft data.  And while it's embarrassing and can lead to adverse career consequences, admitting that you made a mistake is a part of being an adult, and Kouwenhoven and his group have done the right thing by publishing the later paper and retracting the 2018 one.   

 

Some people would look at this situation and say there's something wrong with the way physics works, but I disagree.  As my wife says in a different context, "More communication is better than less communication."  Let anybody who even thinks they have something worth publishing go ahead and publish it, and let the reviewers and critics have at it as hard as they like, without being mean, of course.  That's the way progress happens.  

 

Allegedly, a person matching Majorana's description was seen in the late 1950s in Valencia, Venezuela.  For several years leading up to his disappearance, Majorana had become increasingly isolated, almost like his eponymous particle.  And while he may have lived through World War II and after in professional silence in Venezuela, he may have ended his life in the waters off the Italian coast on March 25, 1938.  Some things we just can't know for sure yet, and that goes for physics too.

 

Sources:  The report by Tom Simonite "Microsoft's Big Win in Quantum Computing Was an 'Error' After All," appeared on Feb. 12, 2021 at https://www.wired.com/story/microsoft-win-quantum-computing-error/#intcid=_wired-homepage-right-rail_c7864c71-ed27-4bf6-8fd8-91bb939170d2_popular4-1.  I also referred to Wikipedia articles on Ettore Majorana and the Majorana particle. 

Monday, January 04, 2021

The SolarWinds Data Breach: Should We Care?

 

The year 2020 will go down in history for a number of reasons, but the cherry on the disaster cake hit the news in mid-December.  Cybersecurity investigators discovered that some software provided by SolarWinds, an Austin, Texas network-monitoring software firm, was "trojaned" some time in early 2020.  Hackers, later identified as Russian, managed to insert malware into an update of SolarWinds's popular network-monitoring software, and this allowed them to access customers' emails and other supposedly secure data from around March of 2020 until one of SolarWinds's customers noticed that someone had stolen some of its cybersecurity tools, and notified the company.  In related attacks, Microsoft software was compromised as well.

 

This was a complicated and well-organized exploit, as the hackers focused their attention on high-value targets such as government agencies.  Wikipedia's article on the breach reads like a list of a spy's dream targets:  the Department of Defense, the National Nuclear Security Administration, the National Institutes of Health (in the midst of the COVID-19 pandemic, yet), the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, the Department of State, and the Department of the Treasury.  As in any spying operation, most of what they got won't be that useful to them, but some of it very well may be. 

 

Fortunately, the hackers did not use their access to lock files or cause other disruptions that might have drawn premature attention to what they were doing.  They were spying, not sabotaging.  But of course, what they learned may help them commit sabotage in the future.  We simply don't know.

 

How did this happen?  In the case of SolarWinds, the hackers gained access to the firm's "software-publishing infrastructure" way back in October of 2019.  Clearly, the company's own security measures were insufficient to prevent this initial breach, which, had it been caught, could have stopped the whole attack in its tracks.  But something as simple as carelessness with passwords can allow hackers into a system.  Hacking is like burglary, in that ordinary defenses stop the average burglar, but if a huge sophisticated gang decides to focus on your house, there's not a lot you can do to stop them.

 

And SolarWinds was the focus of the Russian hacking group known as "Cozy Bear" because of its critical place in the software supply chain.  Thousands of firms use its network-monitoring software, which meant that "trojanizing" a SolarWinds software update gave the hackers potential access to any of SolarWinds's customers' systems.  And that is exactly what happened.

 

Once the breach was discovered last month, SolarWinds went public and warned its customers of the problem.  But as one expert interviewed on the breach put it, fixing the leaks that the hackers established is like getting rid of bed bugs:  sometimes they are so spread out that finding each individual bug is an impossible task, and you have to burn the mattress.  The reason is that once the attackers got into a system, they could wander around and establish more access points.  And stopping the original breach does nothing about those access points, which can be hard to find.  So even though we know how the hackers got in, it's not going to be an easy matter making sure that they can't keep spying on their victims without throwing out a whole lot of software and starting over from scratch.

 

What difference does all this make to the average Joe or Jane?  If you don't work for one of the affected companies or agencies, should you even bother to put this on your already-lengthy worry list? 

 

In itself, the breach's consequences are unpredictable.  Governments keep some things secret for good reasons, mostly, and when those secrets are revealed, bad things can happen.  We are not currently in direct hand-to-hand conflicts with Russia, but there are low-level military operations going on all over the world, many of which the U. S. is involved in without the knowledge of the general public.  As in any military operation, intelligence about plans or proposed actions can be used against you if it leaks, so for one thing, our military forces have been put in a potentially bad situation.  But again, it's hard to tell yet.

 

During World War II, the Germans were largely unaware that the Allies had breached their most-secure code system with the Turing-inspired "bombes" of Bletchley Park, because any military advantage that the Allies' decoding operations gave them was carefully disguised to look like luck.  So we can expect Russia to disguise any advantages it's attained from the Cozy Bear attacks similarly, although we now know roughly what they've been up to. 

 

Institutions change slowly, and the old saying that generals in a new war start out by fighting with the previous war's weapons is still true.  There will always be a need for troops on the ground in some situations, but as more and more commerce and activity of national importance takes place in cyberspace, future battles will also be staged more and more in the digital realm. 

 

As we know from bitter experience in other areas of engineering ethics, it usually takes a spectacular tragedy to inspire major institutional change that could have prevented the tragedy in the first place.  We have been relatively fortunate that bad consequences from cyberattacks on U. S. targets have not approached the magnitude of a 9/11, for example.  Probably the worst ones have been ransomware attacks mounted by apparently private criminal groups that shake down organizations for money, usually in the form of bitcoin.  While serious for the organizations targeted, these sorts of attacks have not up to now appeared to be part of a coordinated terrorist-like systematic assault on the nation's infrastructure.

 

Such an attack could come at any time, however.  And the fact that Cozy Bear hackers were reading the Pentagon's mail for the last nine months does not inspire confidence in the ability of our nation's cyber-warfare personnel to prevent such attacks.  Until we take cyberwarfare as seriously as, if not more seriously than, attacks with conventional weapons, we are effectively inviting hackers to see what they can do to disrupt life in the United States.  Let's hope they don't try any time soon.

 

Sources:  I referred to an article by Kara Carlson of the USA Today Network which appeared on the Austin American-Statesman's website on Dec. 30 at https://www.statesman.com/story/business/2020/12/30/solarwinds-breach-could-shape-cybersecurity-future/3999961001/.  I also referred to a chronology of the attacks on the channele2e website at https://www.channele2e.com/technology/security/solarwinds-orion-breach-hacking-incident-timeline-and-updated-details/, and the Wikipedia article "2020 United States federal government data breach."

Monday, October 26, 2020

Is Google Too Big?

 

On Tuesday, Oct. 20, the U. S. Department of Justice (DOJ) filed a lawsuit against Google Inc. under the provisions of the Sherman Antitrust Act, charging that the firm is a "monopoly gatekeeper for the Internet."  This is the first time the DOJ has used the Act since 1998, when similar charges were filed against Microsoft.  The Microsoft case failed to break up the company, as the DOJ once announced its intention to do, but reduced the dominance of Microsoft's Internet Explorer browser by opening up the browser arena to more competition.

 

By one measure, Google has an 87% market share in the search-engine "market."  I put the word in quotes, because nobody I know gives money directly to Google in exchange for permission to use their search engine.  But as the means by which 87% of U. S. internet users look for virtually anything on the Internet, Google has the opportunity to sell ads and user information to advertisers.  A person who Googles is of course benefiting Google, and not Bing or Ecosia or any of the other search engines that you've probably never heard of.

 

Being first in a network-intensive industry is hugely significant.  When Larry Page and Sergey Brin realized as Stanford graduate students that matrix algebra could be applied to the search-engine problem in what they called the PageRank algorithm, they immediately started trying it out, and were apparently the first people in the world both to conceive of the idea and to put it into practice.  It was a case of being in exactly the right place (Silicon Valley) at the right time (1996).  A decade earlier, and they would have lapsed into obscurity as abstruse theorists who came up with a great idea too soon.  And if they had been only a few years later, someone else would probably have come up with the idea and beaten them to it.  But as it happened, Google got in the earliest, dominated the infant Internet search-engine market, and has exploded ever since along with the nuclear-bomb-like growth of the World Wide Web. 
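The core of the PageRank idea fits in a few lines.  Below is a minimal sketch of the usual power-iteration formulation; the four-page "web" and the damping factor of 0.85 are illustrative values, not anything from Google's actual (and long since far more elaborate) implementation.

```python
# Minimal PageRank sketch: a page is important if important pages link to it.
# Repeatedly multiply the rank vector by the column-stochastic link matrix
# (with damping) until the ranks stop changing.
import numpy as np

# Toy web of four pages; links[i][j] = 1 means page j links to page i.
links = np.array([
    [0, 0, 1, 0],
    [1, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 1, 0, 0],
], dtype=float)

# Each page splits its "vote" evenly among its outbound links.
M = links / links.sum(axis=0)

def pagerank(M: np.ndarray, d: float = 0.85, tol: float = 1e-9) -> np.ndarray:
    n = M.shape[0]
    r = np.full(n, 1 / n)          # start with equal rank everywhere
    while True:
        r_next = (1 - d) / n + d * (M @ r)
        if np.abs(r_next - r).sum() < tol:
            return r_next
        r = r_next

print(pagerank(M))  # importance scores that sum to 1
```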

 

It's hard to say exactly which one of the classic bad things about monopolies is true of Google. 

 

The first thing that comes to mind is that classic monopolies can extract highway-robbery prices from customers, as the customers of a monopoly must buy the product or service in question from the monopoly because they have no viable alternative.  Because users typically don't pay directly for Google's services, this argument won't wash.  Google's money comes from advertisers who pay the firm to place ads and inform them who may buy their products, among other things.  (I am no economist and have only the vaguest notions about how Google really makes money, but however they do it, they must be good at it.)  I haven't heard any public howls from advertisers about Google's exploitative prices for ads, and after all, there are other ways to advertise besides Google.  In other words, the advertising market is reasonably price-elastic, in that if Google raised the cost of using their advertising too much, advertisers would start looking elsewhere, such as other search engines or even (gasp!) newspapers.  The dismal state of legacy forms of advertising these days tells me this must not be happening to any great extent.

 

One other adverse effect of monopolies which isn't that frequently considered is that they tend to stifle innovation.  A good example of this was the reign of the Bell System (affectionately if somewhat cynically called Ma Bell) before the DOJ lawsuit that broke it up into regional firms in the early 1980s.  While Ma Bell could not be faulted for reliability and stability, technological innovation was not its strong suit.  In a decade that saw the invention of integrated circuits, the discovery of the laser, and a man landing on the moon, what was the biggest new technology that Ma Bell offered to the general consumer in the 1960s?  The Princess telephone, a restyled instrument that worked exactly the same as the 1930s model but was available in several designer colors instead of just black or beige.  Give me a break.

 

Regarding innovation, it's easy to think of several novel things that Google has offered its users over the years, including something I heard of just the other day:  you'll soon be able to whistle or hum a tune to Google and it will try to figure out the name of the tune.  This may be Google's equivalent of the Princess telephone, I don't know.  But they're not just sitting on their cash and leaving innovation to others.

 

In the DOJ's own news release about the lawsuit, they provide a bulleted list that says Google has "entered into agreements with" (a politer phrase than "conspired with") Apple and other hardware companies to prevent installation of search engines other than Google's, and takes the money it makes ("monopoly profits") and buys preferential treatment at search-engine access points. 

 

So the heart of the matter to the DOJ is the fact that if you wanted to start your own little search-engine business and compete with Google, you'd find yourself walled off from most of the obvious opportunities to do so, because Google has not only got there first, but has made arrangements to stay there as well.

 

To my mind, this is not so much a David-and-Goliath fight—Goliath being the big company whose name starts with G and David representing the poor exploited consumer—as it is a fight on behalf of other wannabe Googles and firms that are put at a disadvantage by Google's anticompetitive practices.  From Google's point of view, the worst-case scenario would be a breakup, but unless the DOJ decided to regionalize Google in some artificial way, it's hard to see how you'd break up a business whose nature is to be centrally controlled and executed.  Probably what the DOJ will settle for is an opening-up of search-engine installation opportunities to other search-engine companies.  But with $120 billion in cash lying around, Google is well equipped to fight.  This is a battle that's going to last well beyond next month's election, and maybe past the next President's term, whoever that might be. 

 

Sources:  I referred to articles on the DOJ lawsuit against Google from The Guardian at https://www.theguardian.com/technology/2020/oct/20/us-justice-department-antitrust-lawsuit-against-google and https://www.theguardian.com/technology/2020/oct/21/google-antitrust-charges-what-is-next, as well as the Department of Justice website at https://www.justice.gov/opa/pr/justice-department-sues-monopolist-google-violating-antitrust-laws, and the Wikipedia article "United States v. Microsoft Corp." 

Monday, February 18, 2019

Microsoft Puts NewsGuard On Duty


At beaches and pools you'll sometimes see a notice that reads "Lifeguard On Duty," or more often, "No Lifeguard On Duty—Swim At Your Own Risk."  Recently Microsoft, originator of the Edge mobile browser, started including a feature in it called NewsGuard.  The user must activate it, but once he or she does, every news site that's been rated by NewsGuard (about 2000 so far) gets either a green checkmark or a red exclamation point.  Green means the site has passed enough of the nine criteria NewsGuard uses to assess credibility and transparency to meet with their approval.  And of course, red means the site flunked.  The example NewsGuard uses of a site that flunks is RT.com, which is operated by Russia but doesn't make that fact exactly obvious. 
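To see how a criteria-based rating like this might work mechanically, here is a hypothetical sketch.  The criterion names are paraphrased from the kinds of things NewsGuard says it checks, but the point weights and the passing threshold are invented for illustration; NewsGuard publishes its own rubric at the link in the sources below.

```python
# Hypothetical criteria-based rating in the NewsGuard style: a site earns
# points for each criterion it passes, and enough points earns a green mark.
# Names, weights, and threshold are illustrative, not NewsGuard's actual rubric.
CRITERIA = {
    "does_not_repeatedly_publish_false_content": 22,
    "gathers_and_presents_information_responsibly": 18,
    "regularly_corrects_errors": 12,
    "distinguishes_news_from_opinion": 12,
    "avoids_deceptive_headlines": 10,
    "discloses_ownership_and_financing": 7,
    "clearly_labels_advertising": 7,
    "reveals_conflicts_of_interest": 6,
    "provides_author_information": 6,
}  # nine criteria, 100 points total

GREEN_THRESHOLD = 60  # assumed passing score

def rate(site_results: dict) -> str:
    score = sum(pts for name, pts in CRITERIA.items() if site_results.get(name))
    return "green" if score >= GREEN_THRESHOLD else "red"

# A site that hides its financing but passes everything else:
example = {name: True for name in CRITERIA}
example["discloses_ownership_and_financing"] = False
print(rate(example))  # -> green (93 of 100 points)
```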

The fact that such an influential organization as Microsoft thought it was a good idea to include this third-party app (NewsGuard is an independent operation based in New York City) says something about the anxiety that tech and social media companies feel concerning the issues of fake news, divisiveness, and related matters. 

Reasons for this are not hard to find.  As we learned how Russia tried to influence the 2016 elections with fake social media accounts, we were bombarded with tweets from the Oval Office saying all sorts of things, some of which were actually true.  When Facebook founder Mark Zuckerberg was called before Congress last spring concerning misuse of Facebook data by the research firm Cambridge Analytica, he appeared out of his depth when he was asked about the finer points of free speech and what his firm's responsibilities were with regard to spreading disinformation and falsehoods, as well as selling information on users that could be used in politically suspect ways.

On its own website, NewsGuard boasts that it employs "professional journalists," not algorithms, to evaluate news sites.  These journalists presumably sit around a table and debate whether a given site is hiding its true source of financing, for example (not always an easy thing to determine), or whether the news that shows up on it can be verified by independent and multiple sources.  This is nothing more than good journalism, or what used to be called good journalism.  In an era when the word "viral" means something good, "popular" often substitutes for "good," at least when it comes to news, but there's a big difference.

Here's where the philosopher's distinction between "objective" and "subjective" comes in handy.  We have a sense that objective news is better than subjective news, but there's a problem with that.  As the late Mortimer Adler wrote, "We call something objective when it is the same for me, for you, and for anyone else.  We call something subjective when it differs from one individual to another and when it is exclusively the possession of one individual and of no one else."  By that criterion, there aren't that many objective news reports anywhere.  Pictures of a solar eclipse, maybe—obituaries, at least with regard to the facts about a death.  But maybe the late So-and-So was a nice person to you, but a real SOB to others.  Was he a nice guy or not?  That's subjective, as is most of the news reported by even the most sober and responsible journalists, unless it's C-Span-type relaying of an event without any selection, editing, or other intervention by a third party.

So, saying some news sites are objective and others are subjective wouldn't get us very far.  Instead, NewsGuard falls back on the distinction between truth and falsehood, and relies on sources other than the site itself to reveal falsehood.  But of course, those sources may not get it right either, whatever "right" means.  The upshot of all this is that if you, as a NewsGuard evaluator of a website, find that most people and institutions you trust say that a thing is false or misleading, you're going to decide it's false or misleading, and you'll give that site a red "do not trust" rating.

The fear in some circles is that a liberal or other systematic bias may reveal itself in the ways that NewsGuard rates sites.  And I'm sure that something like this will happen.  Already RT.com has run a story saying that NewsGuard is "controversial."  It's understandable that the site NewsGuard itself uses on its own website as an example of a red-rated source complains about the red rating. 

The deeper question is whether the NewsGuard feature will make any difference to users.  The hope is that the hapless passive consumer of news, who formerly was suckered into believing all kinds of claptrap, will now see the red rating on his favorite sites and will turn over a new leaf, avoiding places like Breitbart and the Drudge Report and becoming a more enlightened and useful citizen and voter. 

To some, that's a hope.  To others, it's a fear, which is why many news sources whose common characteristics are hard to discern, but which may generally be classed as conservative (with exceptions), have expressed concern that the wide availability of NewsGuard will lead to some sort of discrimination against them. 

If it's a problem, it's not one that I would personally spend a lot of sleepless nights over.  For one thing, NewsGuard doesn't keep you from viewing a site.  It just tells you that there may be problems with it, and details the problems.  In that sense, it's just a kind of fact-checker or background-provider, and I see no particular harm in that. 

As long as using NewsGuard is voluntary, and as long as its ratings, or something similar, don't acquire the force of compulsion or law and succeed in banning sites altogether, it seems to me that the app can do more good than harm.  Of course, I haven't bothered to check whether they're rating my site, but I doubt that it's one of the top 2000 news sources that NewsGuard has inspected.  We try to tell the truth here, but most readers know this blog mixes opinion with facts.  For those who can't tell the difference, maybe NewsGuard will help.

Sources:  I referred to the NewsGuard website at https://www.newsguardtech.com/ and their nine criteria at https://www.newsguardtech.com/ratings/criteria-for-and-explanation-of-ratings/.  I also viewed the RT story on NewsGuard at https://www.rt.com/news/449530-newsguard-edge-browser-media-integrated/ and the Wikipedia article on NewsGuard.  The quote by Mortimer Adler is from his Ten Philosophical Mistakes (New York:  Collier, 1985), p. 9.

Monday, May 22, 2017

Your Money Or Your Data: The WannaCry Ransomware Attack


On May 12, thousands of users of Windows computers around the globe suddenly saw a red screen with a big padlock image and a headline that read, "Ooops, your files have been encrypted!"  It turned out to be a ransom note generated by an Internet worm called WannaCry.  The ransom demanded was comparatively small—about US $300—but the attack itself was not.  The most critical damage was caused in Great Britain where many National Health Service computers locked up, causing delays in surgery and preventing access to files containing critical patient data.  Fortunately, someone found a kill switch for the virus and so its spread was halted, but over 200,000 computers were affected in over 100 countries, according to Wikipedia.
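The kill switch deserves a word of explanation.  As widely reported, before doing anything else the worm tried to contact a long gibberish domain name, and if the request succeeded it simply quit; a researcher who registered the domain thereby halted the spread.  Here is a minimal sketch of that logic, with a placeholder URL standing in for the real domain:

```python
# Sketch of the reported WannaCry kill-switch check: if the nonsense domain
# answers, stand down; if it is unreachable, the worm went on to spread.
# The URL below is a placeholder, not the actual kill-switch domain.
import urllib.request

KILL_SWITCH_URL = "http://www.placeholder-gibberish-domain.example"

def kill_switch_tripped(url: str = KILL_SWITCH_URL, timeout: float = 5.0) -> bool:
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True    # domain resolves: someone registered it, so halt
    except OSError:
        return False   # unreachable: the worm's original path was to proceed

if __name__ == "__main__":
    print("halted" if kill_switch_tripped() else "would have continued")
```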

No one knows for sure who implemented this attack, although we do know the source of the software that was used:  the U. S. National Security Agency, which developed something called the EternalBlue exploit to spy on computers.  Somehow it got into the wild and was weaponized by a group that may be in North Korea, but no one is sure. 

At this writing, the attack is mostly over except for the cleanup, which is costing millions as backup files are installed or re-created from scratch, if possible.  Experts recommended not paying the ransom, and it's estimated that the perpetrators didn't make much money on the deal, which was payable only in bitcoin, the digital currency that is difficult to trace. 

Writing in the New York Times, editorialist Zeynep Tufekci of the School of Information and Library Science at the University of North Carolina put the blame for the attack on software companies.  She claims that the way upgrades and security patches are done is itself exploitative and does a disservice to customers, who may have good reasons not to upgrade a system.  This was painfully obvious in Great Britain, where their National Health Service was running lots of old Windows XP systems, although the vast majority of the computers affected were running the more recent Windows 7.  Her point was that life-critical systems such as MRI machines and surgery-related instruments are sold as a package, and incautious upgrading can upset the delicate balance that is struck when a Windows system is embedded into a larger piece of technology.  She suggested that companies like Microsoft take some of the $100 billion in cash they are sitting on and spend some of it on free upgrades to customers who would normally have to pay for the privilege.

There is plenty of blame to go around in this situation:  the NSA, the NHS, Microsoft, and ordinary citizens who were too lazy to install patches that they had even paid for.  But such a large-scale failure of what has become by now an essential part of modern technological society raises questions that we have been able to ignore, for the most part, up to now.

When I described a much smaller-scale ransomware attack in this space back in March, I likened it to a foreign military invasion.  That analogy doesn't seem to be too popular right now, but I still think it's valid.  What keeps us from viewing the two cases similarly has to do with the way we've been trained to look at software, and the way software companies have managed to use their substantial monopolistic powers to set up conditions in their favor.

Historically, such monopolistic abuse has come to an end only through vigorous government action to call the monopoly to account.  The U. S. National Highway Traffic Safety Administration, for example, can conduct investigations and levy penalties on auto companies that violate the rules or behave negligently.  So far, software firms have almost completely avoided any form of government regulation, and the free-marketers among us have pointed to them as an example of how non-intervention by government can benefit an industry. 

Well, yes and no.  People have made a lot of money in the software and related industries.  A few people, anyway, because the field is notorious for the huge returns it can give the few dozen employees and entrepreneurs who happen to get a good idea first, implement it, and dominate a new field (think Facebook).  But the same companies charge customers over and over again for the ever-required upgrades and security patches, which are often bundled together so you can't keep the software you like without having it get hacked sooner or later.  In some ways, that makes a software company hard to distinguish from an old-fashioned protection racket, where a guy flipping a blackjack in his hand comes into your candy store, looks around, and says, "Nice place you got here.  A shame if anything should happen to it."

Software performs a valuable service to billions of people, and I'm not calling for a massive takeover of software firms by the government.  And users of software have some responsibility for doing maintenance, assuming that maintenance is of reasonable cost, isn't impossibly hard to do, and doesn't lead to situations that make the software less useful.  But when a major disaster like WannaCry can cause such global havoc, it's time to rethink the fundamentals of how software is designed, sold (technically, it's leased, not sold), and maintained.  And like it or not, the U. S. market has a huge influence on these things.

Even the threat of regulation can have a most salutary effect on monopolistic firms, which to avoid government oversight often enter voluntarily into industry-wide agreements to implement reforms rather than let the government take over the job.  It's unlikely that the current chaos going on in Washington is a good environment in which to undertake this task, but there needs to be a coordinated, technically savvy, but also ethically deep conversation among the principals—software firms, major customers, and government regulators—to find a different way of doing security and upgrades, which are inextricably tied together. 

I don't know what the answer is, but companies like Microsoft may have to accept some form of restraint on their activities in exchange for remaining free of the heavy hand of government regulation.  The alternative is that we continue muddling along as we have been while the growth of the Internet of Things (IoT) spreads highly vulnerable gizmos all across the globe, setting us up for a tragedy that will make WannaCry look like a minor hiccup.  And nobody wants that to happen.

Sources:  Zeynep Tufekci's op-ed piece "The World Is Getting Hacked.  Why Don't We Do More to Stop It?" appeared on the website of the New York Times on May 13, 2017, at https://www.nytimes.com/2017/05/13/opinion/the-world-is-getting-hacked-why-dont-we-do-more-to-stop-it.html.  I also referred to the Wikipedia article "WannaCry ransomware attack."  My blog "Ransomware Comes to the Heartland" appeared on Mar. 27, 2017.

Monday, January 27, 2014

Under the Cloud


The business world is almost as fad-ridden as the education world, and one of the hot words in the last few years is "cloud" as in "I'll get it from the cloud," or "We put all our data on the cloud."  In this sense, the word means a set of Internet servers where your important data is archived so that it is accessible from anywhere that has an Internet connection.  The concept is increasingly vital to commercial and institutional users worldwide, and makes sense in that context.  But as Scientific American columnist David Pogue warns in the February issue, Apple and Microsoft are taking not-so-subtle steps to force many individual users of their products onto the cloud.  And I doubt that anyone reading this column can avoid using Apple and Microsoft products without a lot of inconvenience. 

The situation, as I understand it, is basically this:  suppose you have data that needs continual updating on your portable gizmo (which can be an iPad, an iPhone, a BlackBerry, one of those Android things, or you name it), and you'd also like the same version of the same data on your laptop.  In the old days, whenever you made changes on your calendar, for example, you would then physically plug your portable device through a USB cable or whatnot into your laptop and tell it to sync.  That way, your laptop calendar would agree with your handheld thingy's calendar and vice versa, and you wouldn't find yourself at Aunt Mimi's when you were supposed to be having your teeth cleaned.  So far, so good.

Then the number of handheld devices proliferated, and so did their operating systems, and so did the ways you can have laptops and towers talk with portable systems (wireless, IR, Bluetooth, etc.), and at least according to the manufacturers and their unofficial representatives, it just got to be too hard to come up with proprietary software to sync absolutely every portable thingamajig with each operating system for all the popular computers.  So they just said forget it:  the real data will sit on the cloud, where we can keep track of it, and then all we have to do is make sure that every piece of hardware (portable or not) can keep in touch with the cloud.  And that solved the problem. . . .
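In other words, the vendors traded a combinatorial explosion of device-to-device sync paths for a hub-and-spoke model in which every device syncs with one central copy.  Here is a toy sketch of that hub model with last-write-wins merging; every name in it is invented for illustration, and real sync services handle conflicts far more carefully:

```python
# Toy hub-and-spoke sync: each device reconciles only with the central
# "cloud" copy, and for each calendar entry the later timestamp wins.
# A deliberate oversimplification of what real sync services do.
from dataclasses import dataclass, field

@dataclass
class Store:
    entries: dict = field(default_factory=dict)  # key -> (timestamp, value)

    def sync_with(self, hub: "Store") -> None:
        for key in set(self.entries) | set(hub.entries):
            mine = self.entries.get(key, (0, None))
            theirs = hub.entries.get(key, (0, None))
            winner = max(mine, theirs)           # later timestamp wins
            self.entries[key] = hub.entries[key] = winner

cloud, phone, laptop = Store(), Store(), Store()
laptop.entries["dentist"] = (1, "Mon 9 AM")      # entered first on the laptop
phone.entries["dentist"] = (2, "Tue 10 AM")      # then rescheduled on the phone
phone.sync_with(cloud)
laptop.sync_with(cloud)
print(laptop.entries["dentist"])                 # -> (2, 'Tue 10 AM')
```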

But if you were used to firing up your old laptop and plugging it into your BlackBerry that you've had since 2003, and you are dead-set against keeping your data in a place that you know not where and you know not when it might go down, you are now out in the cold and under the cloud, so to speak.  According to Mr. Pogue, the latest operating systems from both Apple and Microsoft either don't allow you to do hard-wired transfers without involving the cloud, or make it so hard to do that you almost have to get a networking certificate from Microsoft to know how to do it. A discussion thread on an Apple forum on exactly this topic has been going on since last October, and has accumulated 150 pages of comments.  So there are more than a few people upset about this.

Call me Amish, but it doesn't affect me because my form of a BlackBerry is a three-by-five card.  Or rather, many three-by-five cards.  I suppose if you took all the three-by-five cards I've used in the last decade and piled them up, they would make a stack high enough to fall over and form the kind of mess my desk looks like some days.  In fact, that may be why. . . anyway, somehow I have survived thirty years of an occasionally intense professional life with nothing more advanced than a laptop or two and a mobile phone that you still have to use the numeric keypad for to send a text.  It's so annoying to do it that way that I hardly ever send texts, which is all right by me. 

But seriously, this specific issue is an example of a more general trend among organizations:  a move toward exerting increasing control over any computer that is connected to one of their networks.  For example, I spend some time at the University of Texas at Austin.  If I were using a University-provided laptop (which I'm not, as it turns out), I would now have to make sure that all the data on it was encrypted with a University-approved encryption package, so that if it happens to get stolen, the thieves can't run off with University data.  That makes sense from a liability and security point of view, since I have blogged on numerous scandals and crimes that happened when someone took home a laptop full of supposedly secure data, but it represents another intrusion, if you will, into a space that was formerly rather private. 
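For what it's worth, the kind of protection the University is after can be illustrated in a few lines.  This sketch uses the widely available Python cryptography package's Fernet recipe to encrypt a file at rest; the file names are made up, and real full-disk encryption (BitLocker, FileVault, and the like) works at a much lower level than this:

```python
# Minimal sketch of encrypting a file at rest, so a stolen copy is useless
# without the key.  Uses the "cryptography" package's Fernet recipe;
# file names here are invented for illustration.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in real life, store this key securely,
fernet = Fernet(key)               # never alongside the encrypted data

with open("student_records.csv", "rb") as f:       # hypothetical file
    ciphertext = fernet.encrypt(f.read())

with open("student_records.csv.enc", "wb") as f:
    f.write(ciphertext)

# Only someone holding the key can reverse it:
plaintext = fernet.decrypt(ciphertext)
```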

Of course, if the University owns the laptop, they get to say what you can and can't do with it.  Privately owned computers connected to privately rented networks are another matter, but then you still have to deal with Apple or Microsoft, and their pressure to keep your stuff on the cloud will prove irresistible.  The Star Trek Borg, a race of cybernetic beings, liked to say "resistance is futile," but that was only a TV show.   

Personally, I don't see any real harm in letting Microsoft know the details of my next dental appointment.  And yes, those massive servers go down from time to time, but then so does your laptop.  I admit that I would feel a certain kind of existential queasiness in entrusting the only record of my professional schedule to some ethereal system that is everywhere and nowhere, rather than having it in a tangible, solid form on pieces of paper in my appointment calendar in my briefcase.  (Yes, I do that the old-fashioned way too.)  Maybe people living in the 1850s felt the same way about the newfangled electromagnetic telegrams, and didn't really trust them on an instinctive level as much as they would trust a letter written by the hand of a friend they knew.  But they got used to trusting telegrams, and I suppose we will get used to trusting the cloud, as long as our trust is not abused. 

Sources:  The online version of David Pogue's article "The Curse of the Cloud" can be found at http://www.scientificamerican.com/article/were-forced-to-use-cloud-services-but-at-what-cost/.  I also referred to Wikipedia articles on BlackBerry and Borg (Star Trek).