Monday, January 20, 2020

The Value of Personal Data


In some countries, all mineral rights are owned by the government.  In these countries, your family may have owned a plot of ground for generations.  But if the government thinks there's oil under it, they can come in, drill a well in your back yard, and make millions off the oil they find—and not give you a cent.  And it's all legal.

Other countries with different traditions regarding property rights view this situation as unjust.  In the U. S., for example, mineral rights usually vest in the property owner, which is how many otherwise dirt-poor Texans got rich when oil was discovered on their previously worthless land.

What goes for land that you buy should also go for things that you do.  If your actions lead to the creation of something of value, it seems only fair that you should receive fair compensation for that value, in money or some other exchangeable form. 

In Don't Be Evil, journalist Rana Foroohar describes how large tech companies such as Facebook, Amazon, and Google, as well as many smaller ones, collect data from us that is estimated to be worth nearly $200 billion.  When you click on a link, or lately even talk about certain things within hearing of your personal assistant or your mobile phone, that information is noted, logged, and used to sell advertising and other things for which the giant tech companies get real money.  And as the Internet of Things grows, with its ability to track our movements and other actions, this data stream will only get bigger. 

What do you get in exchange for providing the lifeblood of commerce for these firms?  They would say that you receive lots of free stuff in return—free web searches, a free personal Facebook page, free ads for things you may want to buy, and so on.  And this is true.  But it is far from the ideal free-market exchange, in which both parties come to an agreement on a more or less even footing, after sharing essential information and comparing the potential deal with any others they might make elsewhere.  

To put this situation in perspective, Foroohar points out that $200 billion is more than the total value of the annual U. S. agricultural output.  In other words, it's as if the large food companies (ConAgra, Tyson Foods, etc.) took everything that U. S. farmers grow but paid them nothing for it.  Nobody would put up with that, and nobody would keep farming for very long either.

But just living an ordinary life these days means that you constantly do things that produce little bits of valuable data for the likes of Facebook, Google, and Amazon, whether you really mean to or not.  And in a technical sense, it is perfectly legal.  The cadres of tech-company lawyers who write the incomprehensible boilerplate in every software agreement (the one you lie about having read before you can use the software) make sure of that. 

One of the good outcomes of the otherwise grim Nuremberg Trials of Nazi war criminals was the development of the Nuremberg Code, which has since been adopted to govern experiments involving human subjects.  One of the code's core principles is that participants must give informed, voluntary consent to being experimented on.  In other words, they must clearly understand the possible consequences of participating in the experiment and be free to say yes or no after making an informed decision.

If we regard the entire data-mining operations of the big tech companies as a large-scale, long-term experiment on the public, it is easy to see that we as individuals are at a vast disadvantage compared to the firms that profit from the data we generate.  Withholding our data would be difficult or impossible, especially when we don't even know that we're providing it (e. g. when Alexa or your mobile phone eavesdrops on your conversations).  And we have no idea what consequences will result from our actions.  Among those consequences I include the fact that the rich monopolies named above get even richer, while in exchange I receive services that are convenient, true, but whose value I would be hard put to estimate in dollar terms.  Even if I did, it's doubtful that the value I perceive myself as getting from these firms would come anywhere close to the money they make by mining my data.

The fact that I have to go through mental contortions even to think this way shows how deeply disguised the process is.  As an engineer, I'm trained to think of worst-case scenarios, and if I let my imagination wander in that direction with regard to the situation of data mining, I might come up with something like this:  The U. S. economy becomes even more two-tiered, with a very small number of very wealthy people working for or associated with the largest monopolistic tech firms, and everybody else on some kind of government-paid dole to keep them from starving, because most other jobs have vanished.  Research and development dries up here and moves to China, where most future technology developments happen under the firm control of the government there. 

I could go on, but I think the point is sufficiently clear:  every day, with every click on a site associated with the largest tech firms, we allow them to obtain data that we generate but that they profit from. 

I do not pretend to have a good solution to this problem.  When similar situations arose in the past, such as during the "robber-baron" period of the 1800s, when railroads monopolized essential transportation and commodities, the government had to intervene with countervailing forces embodied in things like the Interstate Commerce Commission and antitrust laws.  If the economy, the job market, and society in general are not to be further hollowed out by the activities of the large online tech firms, which are now indisputably having a negative effect on the political viability of our democracy, something needs to be done.  But I'm not sure what. 

Sources:  Rana Foroohar's book Don't Be Evil was published by Random House in 2019.  The statistic about the value of data mined from the public being worth an estimated $197.7 billion by 2022 is on p. 25.

Monday, January 13, 2020

Death Rode the Rails, Indeed


The prospect of dying in a railroad accident is not something that too many Americans worry about these days.  But it was not ever thus.  In an excellent but little-known book entitled Death Rode the Rails:  American Railroad Accidents and Safety 1828-1965, economic historian Mark Aldrich reveals that in the earliest days of rail travel in the 1840s, passengers were sometimes surprised to see a thin strip of iron thrusting up through the floor of the carriage, threatening to impale them like bits of beef on a barbecue skewer.  The antebellum press called these strips "snakeheads."  They appeared because some of the earliest railroads, to save money, used a thin iron strap fastened to a wooden rail instead of a solid iron rail, and the strap would sometimes come loose from the wood and snake its way up into the cars.  Relatively few people were killed in these accidents, but the combination of surprise and powerlessness to avoid them made them particularly horrifying, and the novelty of rail travel was tarnished in the public mind by this vivid addition to the list of ways one could depart this earth.

While Aldrich has plenty of stories about the different ways that passengers, railroad employees, and trespassers on railroad property were injured and killed, his emphasis is on how economic considerations shaped railroad safety.  He points out that even after wood-and-strap-iron rails were replaced with all-metal rails and many other safety improvements were made, traveling by U. S. rail in 1907 was still 110 times as dangerous as flying in a modern (2006) airliner.  Still, 22 fatalities per billion passenger-miles did not mean that you were taking your life in your hands every time you climbed onto a train.
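
Those two figures are mutually consistent, as a quick back-of-the-envelope check shows (a minimal Python sketch, using only the numbers quoted above):

    # Rail vs. air fatality rates, from the figures in Aldrich's comparison
    rail_1907 = 22.0        # fatalities per billion passenger-miles, U. S. rail, 1907
    risk_ratio = 110        # how many times riskier 1907 rail was than a 2006 airliner

    airline_2006 = rail_1907 / risk_ratio
    print(f"Implied 2006 airline rate: {airline_2006:.1f} fatalities per billion passenger-miles")
    # prints: Implied 2006 airline rate: 0.2 fatalities per billion passenger-miles

In other words, the modern airliner comes out at roughly 0.2 fatalities per billion passenger-miles, a risk so small that the 1907 rail figure looks alarming only by comparison.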

From the railroad companies' point of view, safety was an expense, and like every other expense, they wanted it to pay a return on investment.  Railroads were virtually unregulated by the federal government until the establishment of the Interstate Commerce Commission (ICC) in 1887, and for many years the ICC restricted itself to setting freight rates for interstate commerce.  Some safety ideas, such as the "block signal" system of controlling train movements, rather than sending out paper orders and hoping everyone would synchronize their watches and keep to the schedule, not only reduced accidents but increased traffic flow, leading to greater utilization of existing plant and higher profits.  The railroads liked this kind of safety measure.

On the other hand, in 1922 the ICC ordered all carriers (rail lines) with revenues over $25 million to install automatic train control on at least one passenger line.  The idea of automatic train control, which dates back to the 1800s, is that instead of relying on the engine driver to see a visual block signal and stop the train, the automatic system would directly receive the signal's command and apply the brakes.  The rail companies reluctantly complied, and by 1930 had spent $26 million to install the system on over 15,000 miles of track. 
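
From the figures above, the per-mile cost is easy to work out (again a minimal Python sketch; these are 1930 dollars, with no attempt at inflation adjustment):

    # Per-mile cost of the ICC-mandated automatic train control, from the figures above
    total_cost_usd = 26_000_000   # spent by the carriers by 1930
    track_miles = 15_000          # miles of track equipped

    print(f"${total_cost_usd / track_miles:,.0f} per mile of track (1930 dollars)")
    # prints: $1,733 per mile of track (1930 dollars)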

But as Aldrich shows, automatic train control made essentially no difference in the rail safety record, did not improve productivity, and cost a great deal of money.  During the Great Depression, many carriers asked for and received permission to cut back or remove automatic train control, and the ICC relented.  However, the same technology turned out to be useful for activating signals in the driver's cab (so-called "cab signals"), which have since become a standard safety feature and a great help when fog or rain obscures the trackside signals. 

I was unaware of all this when I blogged a few years ago about a "cornfield meet" (head-on collision) between two freight trains in Texas that killed three employees and did millions of dollars of damage.  At the time, the railroads were installing something called Positive Train Control (PTC), which is nothing but an updated electronic form of automatic train control.  So the idea has been around for more than a century, it turns out, and is just now being implemented.  But as Aldrich points out, the accidents in which PTC would have made a difference are a small percentage of all mishaps.

While Aldrich makes a great case that economics was a huge factor in railroad safety, he gives less emphasis to something that continues to drive debates about all kinds of transportation safety today:  public perception.  He does point out that the average citizen has an exaggerated horror of deaths that are grisly and out of one's control, such as the snakehead accidents.  All the statistics in the world will not comfort the lizard part of one's brain that is primally terrified by the prospect of a fiery or gory death inside some machine that one cannot influence.  But other factors, such as speed and convenience, can overcome such fears.  For example, early automobile travel (say around 1920 to 1940) was demonstrably many times as dangerous as rail travel, yet the rail lines lost most of their short-range passenger business to the automobile in that period.  Ah, but the driver of a car has at least the illusion of control, thinking that while accidents may happen to other drivers, his superior skills will enable him to avoid a crash.  Well, maybe, but the statistics said otherwise.

As you would expect, engineers come in for starring roles in Aldrich's saga.  The technical press, including editors of such publications as Railway Age, brought constructive criticism to bear on egregious safety problems and coordinated cooperation among carriers, government institutions, and private and university researchers to bring about notable improvements in safety systems, devices, and training.  This included issues such as the quality of bridge construction.  Early U. S. railroad bridges were built as pin-connected trusses, in which the failure of even one joint could make the whole structure fall down, which it often did.  Complex failure modes in steel rails baffled engineers and scientists for decades, until a concerted effort involving inventor Elmer Sperry's electrical track-inspection system and advances in metallurgy showed how to prevent them. 

An old friend of mine summed up the goal of engineering ethics with the two-word phrase, "No headlines."  While U. S. railroads are doing pretty well today by that measure, it is the end result of many decades of improvements and safety efforts.  And Mark Aldrich has given us that history in a rewarding and highly readable volume.

Sources:  Death Rode the Rails (Johns Hopkins Univ. Press, 2006), by Mark Aldrich, is the source for most of my material.  I also drew on the following website for additional details about "snakeheads":  https://aaronwmarrs.com/blog/2012/02/snakeheads-on-antebellum-railroads.html.  My blog about the head-on collision in Texas is at https://engineeringethicsblog.blogspot.com/2018/08/some-answers-about-panhandle-cornfield.html.

Monday, January 06, 2020

Are Self-Driving Cars More Dangerous?


Around midnight on Sunday, Dec. 29, 2019, the driver of a Honda Civic heading north on Vermont Avenue in Gardena, California was making a left turn from Vermont onto Artesia Boulevard.  The traffic light at the intersection was red for westbound traffic on Artesia, and the intersection happens to be the western end of the Gardena Freeway, where it becomes a surface road.  At that moment, a Tesla Model S zoomed off the freeway westbound, ran the red light, and crashed into the Honda, killing its two occupants.  The Tesla driver and his passenger were not seriously injured.  Early news reports failed to indicate whether the Tesla was on autopilot, but National Highway Traffic Safety Administration (NHTSA) officials are investigating the crash to determine whether the autopilot was engaged.

This latest fatality involving an autopilot-equipped Tesla inspired an Associated Press review of recent fatalities involving Tesla cars in which the autopilot was engaged.  The curious reader can view the website www.tesladeaths.com, where someone has attempted to compile a complete list of worldwide statistics for fatal crashes involving Tesla cars.  As of the end of 2019, the list totaled 110 deaths, of which only 4 fall into the category of "verified Tesla autopilot death."  As well over 200,000 Teslas have been sold, these statistics are not particularly remarkable, except for the fact that Tesla purports to be the leading edge of the automotive future.  As such, it deserves closer scrutiny, and that is what it's getting.
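
For perspective, the proportions implied by those tallies are easy to compute (a minimal Python sketch, using only the numbers quoted above):

    # Share of Tesla fatalities verifiably tied to autopilot, per the tesladeaths.com tallies
    total_deaths = 110        # worldwide deaths in crashes involving Teslas, through 2019
    autopilot_verified = 4    # deaths in the "verified Tesla autopilot death" category

    share = autopilot_verified / total_deaths
    print(f"Verified-autopilot share of Tesla fatalities: {share:.1%}")
    # prints: Verified-autopilot share of Tesla fatalities: 3.6%

That 3.6% figure is worth keeping in mind when we return to the question of public perception below.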

The problem with answering the question in our headline is:  more dangerous than what?  Not only does Tesla make the world's best-selling plug-in passenger car, it offers what many regard as the most sophisticated commercially available autopilot system as well.  And in contrast to the more conservative approach many automakers have taken in adding self-driving features such as lane following and automatic braking, a Tesla driver can turn on autopilot and let go of the wheel.  Tesla advises against such behavior, but since when have instruction manuals been 100% effective in keeping people from doing stupid things?

The Associated Press article quotes Jason Levine, head of the nonprofit Center for Auto Safety in Washington, as saying, “At some point, the question becomes: How much evidence is needed to determine that the way this technology is being used is unsafe?”  Levine criticized the NHTSA for dragging its feet instead of issuing regulations as to how Tesla's autopilot feature can be used.  Simply warning the driver that he or she should be alert at all times when the autopilot is working doesn't make it happen.  At least two fatal U. S. crashes (one in Florida and one in Ohio) happened when the autopilot's sensors became confused and failed to recognize a large truck blocking the roadway.  Presumably, if the drivers had been paying attention, they might have seen the truck and stopped.

Promoters of the autonomous-vehicle future face two distinct but interrelated obstacles that could delay or even prevent widespread adoption of self-driving cars.

The first obstacle is technology.  The Society of Automotive Engineers (SAE) has defined six levels of automated driving, numbered 0 through 5.  Level 0 is a 1955 Plymouth—completely manual operation—and Level 5 would be the equivalent of an electronic chauffeur:  the passenger can watch TV, sleep, or do anything else you would do if you knew a trusted and competent human driver was in charge.  No automaker currently offers a Level 5 vehicle for sale, but Elon Musk is claiming that early this year, Tesla will start to sell fully self-driving cars, which sounds like Level 5 to me.  Of course, this may be nothing but vaporware.  But unless some pretty radical improvements are made in autopilot technology, it's inevitable that some fatal crashes will happen in which the autopilot was engaged.
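
For reference, the whole ladder can be written down as a small lookup table.  The one-line glosses below are my own shorthand, not SAE's official J3016 wording:

    # SAE driving-automation levels 0-5, paraphrased (not SAE's official wording)
    SAE_LEVELS = {
        0: "No automation: the human does everything (the 1955 Plymouth)",
        1: "Driver assistance: one function automated at a time, e.g. cruise control",
        2: "Partial automation: steering and speed together, but the driver must supervise",
        3: "Conditional automation: the car drives, but a human must take over on request",
        4: "High automation: no human needed, within a limited operating domain",
        5: "Full automation: the electronic chauffeur, anywhere a human could drive",
    }

    for level, gloss in SAE_LEVELS.items():
        print(f"Level {level}: {gloss}")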

And here's where we run into the second obstacle:  public perception, including the perception of government regulators, lawmakers, and their constituents.  I don't know about you, but I would feel a lot worse thinking about dying in a car wreck in which my car was driving itself, rather than dying in one where I was actively at the wheel.  True, I'd be just as dead in either case, but there's something about the hope that one can make a difference if one is trying to control the situation.  This is not a completely rational state of mind, but carmakers learned decades ago to appeal to the sub-rational "lizard brain" of the consumer.  Why else do pickup ads show their products bounding over rugged mountains and doing extreme feats that 99% of drivers will never have to do? 

The budding autonomous-car industry is still treading on very shaky ground, at least in the U. S., where the majority of fatal accidents involving Teslas have occurred.  As the statistics show, only a small fraction of Tesla fatalities (4 verified out of 110 on the tesladeaths.com count) is associated with the use of the autopilot.  But statistics do not count for much in public perception, and Elon Musk's cowboy-style reputation lends credibility to the accusation that he and his company are playing games with the safety of their customers, and by implication, with the safety of anyone within collision range of a Tesla.

I concede that, if properly designed and deployed, autonomous vehicles could lower the rate of traffic fatalities while lessening traffic congestion and doing other good things, such as reducing carbon emissions.  But there is a world of challenges in that "if."  There may be unknown factors that no one will discover until a certain critical mass of autonomous vehicles is already on the road.

In the statistical mechanics of solutions of, say, salt in water, you can apply simple rules to very dilute solutions, so dilute that each dissolved ion can be treated as if it were the only one in the solution.  So far, autonomous vehicles are so rare that each one is surrounded by a sea of non-autonomous vehicles, and the software probably operates under that assumption.

But when you put enough salt in the water that each ion gets within shouting distance of another, things get complicated.  New effects such as saturation and crystallization occur, and your analysis has to become more sophisticated to deal with them. 

If autonomous vehicles, especially those made by different manufacturers, ever become common enough so that one vehicle can "see" another one in a typical driving situation, it is very likely that novel and perhaps hazardous effects will occur that even the designers may not have anticipated.  But that will never happen if the public gets so fearful of accidents involving autopiloted cars that they are regulated out of existence.  I hope that doesn't happen either, but if Musk and Tesla get too careless, they might end up triggering just such a reaction.

Sources:  The Associated Press article I referred to appeared in many locations, among which was the San Jose Mercury-News website at https://www.mercurynews.com/2020/01/03/3-crashes-3-deaths-raise-questions-about-teslas-autopilot/.  I also referred to the same site for information on the Gardena crash at https://www.mercurynews.com/2020/01/02/fatal-tesla-crash-in-california-investigated-by-feds/.  The Tesla fatal-crash statistics website is www.tesladeaths.com, and I also referred to the Wikipedia article on Tesla, Inc.

Monday, December 30, 2019

Boeing Chief Fired Over 737 Max Controversy


On Sunday, Dec. 22, members of the board of directors of Boeing held a conference call and decided to fire Boeing CEO Dennis Muilenburg.  Since the grounding of the company's 737 Max jetliners last spring after two crashes that killed over 300 people, Muilenburg has faced increasing criticism.  At issue is the jetliner's Maneuvering Characteristics Augmentation System (MCAS), a software patch that was intended to make the 737 Max fly more like its predecessor airframes, which date back to the 1960s.  But in documents released last October, Boeing's former chief test pilot Mark Forkner wrote in an email as far back as 2016 about "egregious" behavior of the MCAS in flight-simulator tests.

Leaders in an engineering-intensive industry face constant conflicting pressures.  On the one hand, there is the need to make a profit so that your organization can continue its existence and benefit the public in some way with its products and services.  On the other hand, demands for resources to ensure safety and reliability of those products and services cost money, and the trick is to strike a balance between excessive engineering that runs profits into the ground, and skimping on due diligence that leads to shoddy products.  Not being qualified to run a lemonade stand myself, I have nothing but admiration for executives who manage this balancing act, and until recently, Dennis Muilenburg was apparently doing it well enough for the Boeing board of directors to keep him on.

But no longer.  After the fatal 737 Max crashes in Indonesia and Ethiopia were shown to be due to unexpected actions of the MCAS, both the U. S. Federal Aviation Administration (FAA) and eventually the U. S. Congress began investigations into the development of the aircraft and the reasons why MCAS was designed in the first place.  As we mentioned in an earlier blog, a series of physical design changes involving bigger engines made the 737 Max airframe behave very differently from its predecessors.  According to Gregory Travis, a software engineer and pilot who examined the issue, the right thing to do at that point was for Boeing to undertake a complete mechanical redesign of the aircraft, which would have been very costly in terms of both time and money.  Instead, Boeing chose to create a software patch—MCAS—that sought to make the plane handle more like it used to.

The problem was that under certain combinations of instrument failures (in both accidents, a faulty angle-of-attack sensor), MCAS drew the wrong conclusions about what was going on with the plane and took over the flight controls from the pilots in a way that was both startling and extremely difficult to overcome.  The Indonesian and Ethiopian crews were not able to regain control, and their planes crashed. 
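
Boeing's actual flight-control code is proprietary, but the design flaw that investigators identified (reliance on a single angle-of-attack sensor) can be illustrated in miniature.  The sketch below is hypothetical Python, nothing like certified avionics software; the function name and thresholds are invented for illustration:

    # Hypothetical illustration of cross-checking redundant angle-of-attack (AoA)
    # sensors before letting software command nose-down trim.  Names and numbers
    # are invented, not Boeing's.

    STALL_ONSET_DEG = 14.0     # illustrative AoA above which automatic trim would engage
    DISAGREE_LIMIT_DEG = 5.5   # illustrative limit on disagreement between the two vanes

    def may_command_nose_down_trim(aoa_left_deg, aoa_right_deg):
        """Allow automatic trim only when both AoA sensors agree it is needed."""
        if abs(aoa_left_deg - aoa_right_deg) > DISAGREE_LIMIT_DEG:
            # Sensors disagree: distrust both, alert the crew, take no automatic action
            return False
        return min(aoa_left_deg, aoa_right_deg) > STALL_ONSET_DEG

    # One failed vane reading 25 degrees while the other reads 3 no longer triggers trim:
    assert may_command_nose_down_trim(25.0, 3.0) is False

A single-sensor design has no such cross-check:  one bad vane, and the software acts on fiction.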

At first, Boeing blamed inadequate pilot training for the crashes, but as the firm has released more internal documents in response to Congressional inquiries and FAA requests, it's beginning to look like at least some people inside Boeing had grave doubts about the viability of the MCAS for safe flying.  Although the public has not yet obtained access to most of these documents, some emails released in October reveal that back in 2016, test pilot Mark Forkner had doubts about the MCAS even when it was only incorporated into the controls of a flight simulator.  The U. S. House committee familiar with the documents says that "the records appear to point to a very disturbing picture of both concerns expressed by Boeing employees about the company’s commitment to safety and efforts by some employees to ensure Boeing’s production plans were not diverted by regulators or others."

An organization's culture is one of the hardest things to describe, but it can be one of its most important assets, or just as easily a liability.  In the quasi-military structure of most commercial firms, leadership sets the overall tone of a culture, but it's a constant struggle to maintain that tone throughout all parts of the organization. 

"Transparency" is a word that shows up a lot when a firm like Boeing appears to have been concealing information that might have made it look bad, or caused regulatory problems and delays in production.  Obviously, transparency is a relative goal.  No firm in a competitive market can afford to be completely transparent about its plans and specialized technologies.  At various times, engineering-intensive companies have tried this in the form of technical newsletters, in which their engineers bragged about their latest developments in enough detail to allow competitors to copy and improve upon them.  Needless to say, such newsletters are found today only in the dusty shelves of libraries that keep material from defunct companies, such as General Radio and the original incarnation of Hewlett-Packard. 

But transparency is a necessity when it comes to issues that affect safety.  On an individual level, the moment you feel a need to hide something you're doing, that feeling should serve as an alarm, prompting you to ask why you're hiding it.  But in an organization where the immediate pressures favor shipping products and minimizing anything that stands in the way of that goal, it's easy simply not to say something you ought to say, or not to deliver the bad news that will disrupt the schedule that marketing wants to keep. 

The buck stops at the CEO's office, and in firing Muilenburg, Boeing's board of directors has acknowledged that the company's culture has to change from the top down.  Whether a new leader can take the company back to a point where its 737 MAX jetliners can be flown safely again is still very much an open question, however.  Scrapping them or recalling them for a major mechanical redesign would probably spell an end to Boeing as a commercial-aircraft firm, leaving the field to Airbus.  But it's hard to see how anyone is going to have a great deal of confidence in a fix that is mainly software, which is how the 737 MAX got into this mess in the first place. 

Monday, December 23, 2019

Safe People or Safe Systems? The Ring Security Breach


On Wednesday, December 4, eight-year-old Alyssa LeMay heard the sound of Tiny Tim singing "Tiptoe Through the Tulips" coming from her bedroom upstairs in her home in Mississippi.  As she walked into the room, the music stopped and she heard a voice say, "Hello there."  As she looked around the room to see where the voice was coming from, it called her a racial slur which was neither acceptable nor accurate, claimed that it was the voice of Santa Claus, and told her to start misbehaving by, for example, breaking her TV.

Having more sense than to listen to such temptations, she went downstairs and told her father, "Someone's being weird upstairs."  He discovered that a Ring security camera that the family had bought during a Black Friday after-Thanksgiving sale had been taken over by someone who obviously wasn't supposed to be able to do that. 

The LeMays eventually contacted the Washington Post, whose story on the episode was republished widely.  When the LeMays called Ring to complain, they were told, basically, that the breach was their fault.  Ring determined that the bad actor had obtained the LeMays' username and password from another site and used them to hack into Alyssa's bedroom.  Ring castigated the LeMays for not using the two-step verification method that Ring recommends.  In a statement published on Ring's website, the company said "we have investigated this incident and have no evidence of an unauthorized intrusion or compromise of Ring’s systems or network."

Let's step back a moment and parse that statement.  What Ring means by unauthorized and what the LeMays mean by unauthorized appear to be two different things.  Only an authority, an entity or person empowered to grant permission, can really authorize an intrusion or compromise.  For that matter, saying "unauthorized intrusion" is like saying "impermissible burglary":  I'm not aware of any kind of burglary that is permissible, or of an intrusion that is authorized.  But the point is that the LeMays were, by any reasonable standard, the only people logically empowered to authorize access to the camera, microphone, and speaker in their daughter Alyssa's bedroom.  They did not authorize the criminal who gained access to the Ring device, and therefore, by this reasonable, common-sense definition of "authorized," there was unauthorized access.

Now look at it from Ring's point of view, which by implication is Amazon's point of view, as Amazon owns Ring.  Think like a software lawyer for a minute.  When we sell a product to a consumer, we have to make sure that the consumer has enough information to avoid problems with the product.  We as lawyers observe the legal fiction that every one of our customers always reads all the fine print and boilerplate that comes with all our products, including the stuff about installing two-step verification for passwords, using strong passwords, and so on.  If we actually made the product so that it wouldn't work unless the user really took all these complicated measures, very few people except computer nerds and lawyers would buy it, so we make it so it will work even if you leave your username as "1234" and your password as "password."  But if the user is so negligent, stupid, (fill in your favorite lawyerly pejorative adjective here) as to not take the recommended precautions, well, too bad.  We've done our lawyerly job, and if anything goes wrong it's on the consumer's head.  To us, "unauthorized" means that somebody hacked into our system and was able to access a device that even the most computer-savvy consumer installed with all the security bells and whistles.  And that didn't happen here, so we are blameless.  Legally speaking.

There is a well-known pattern in the safety and security of innovative technologies.  At first, a new technology requires its users to learn lots of detailed precautions to avoid injury or other harm.  But as the technology becomes more widespread and less-trained people use it, the harms that come to uneducated users happen more often, sometimes so often that the very existence and continued use of the technology is threatened.  Only then do the technology's designers step back and ask themselves, "How can we make this really foolproof, so that someone who knows next to nothing about it can nevertheless use it safely?"  At that point, engineers begin to design safety into the technology itself.  It may cost a little more, but the improvement in safety when used by untrained personnel is usually worth it.

This pattern held for railroading, it held for automobiles, and in some ways it has held for computer and information technology.  But not nearly enough, as Alyssa's story shows.  In consumer electronics, where ease of use and low cost are two paramount requirements, security often becomes an afterthought.  A non-technically-trained user who simply wants to check on his or her daughter with a camera should not be expected to do anything that isn't strictly necessary to set up the system.  The two-step verification precaution obviously wasn't necessary for the camera to work, so the LeMays didn't use it.  And by reusing passwords, an unfortunate but understandable practice in these days of seventeen gazillion passwords that all our devices and services demand of us, they created a situation in which some hacker stole their credentials and used them to access the Ring device in Alyssa's room.

Ring wants its consumers to be safe people—people who don't reuse passwords and who read enough of the fine print in the online instructions to go the extra mile and install extra, though non-essential, security precautions.  But people, by and large, want safe systems—systems that simply will not work unless they are set up with sufficient security to begin with.  And history shows that the systems and technologies that survive beyond a highly trained niche market are usually safe systems:  systems that anybody off the street can get running with a minimum of effort without running the risk of endangering themselves or their family members. 
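
What would a safe-system setup flow look like?  In rough outline, something like the following sketch, in which the device simply refuses to come online until minimum security is in place.  The function and its checks are hypothetical, not Ring's actual provisioning API:

    # Hypothetical "safe system" activation: the camera will not stream until
    # basic security requirements are met.  Not Ring's actual API.

    WEAK_PASSWORDS = {"password", "1234", "123456", "qwerty"}

    def activate_camera(password, two_factor_enrolled):
        """Return True (activate) only if minimum security requirements are met."""
        if password.lower() in WEAK_PASSWORDS or len(password) < 12:
            print("Refusing to activate: choose a longer, unique password.")
            return False
        if not two_factor_enrolled:
            print("Refusing to activate: enroll a second verification step first.")
            return False
        return True   # only now does the camera start streaming

    # A safe-people design would have activated this account anyway; a safe system won't:
    activate_camera("password", two_factor_enrolled=False)

A password reused from a breached site would still pass these particular checks; a real system could also screen credentials against known-breach lists.  But even the minimal version above would have stopped the weakest setups cold.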

Sources:  The Austin American-Statesman carried the Washington Post's article "Camera in child's room hacked, 8-year-old harassed" on pp. E3-E4 of their Dec. 15, 2019 edition.  The statement from Ring concerning this incident can be found at https://blog.ring.com/2019/12/12/rings-services-have-not-been-compromised-heres-what-you-need-to-know/.