
Monday, October 02, 2017

Internet Security Isn't Child's Play


Full disclosure:  my wife and I have never had children.  The closest we have come to full-time responsibility for someone younger than 80 was when our ten-year-old nephew came to stay with us for part of the summer of 2013.  So what I have to say about the hazards of buying smart Internet-connected toys for your kids is, from my point of view, entirely hypothetical and untouched by the seasoning of personal experience.  Nevertheless, it's a new kind of problem and those with parental responsibilities need to be aware of it.

For the last several years, one of the biggest trends on the consumer-electronics horizon has been the Internet of Things (IoT).  It's now so cheap to connect tiny, inexpensive devices to increasingly powerful cloud-computing apps on the Internet that companies are falling over each other trying to get their IoT-enabled gizmos to consumers.  And the gold-rush analogy is especially apt for the toy market, which is highly seasonal and driven by novelty even more than the rest of the consumer business. 

When IoT came along, we began to see a flock of toys that connect to the Internet for some of the same reasons devices for adults do:  message sharing, video recording, GPS-enabled location features, and so on.  But when adults use IoT-enabled equipment, there is at least a presumption that they can read the instructions and take whatever precautions are needed to keep malign third parties from exploiting the window into their personal lives that such a device opens. 

Not so with children.  A recent story in the Washington Post details how the FBI issued a consumer notice about "smart toys" that connect to the Internet.  Inspired partly by recalls in Europe of a talking doll that a hacker could use as a listening device, the FBI says that parents should be very careful about purchasing or setting up any toy that can connect to the Internet. 

While I'm not aware of any crimes shown to have been committed by such means, it's not hard to imagine such a situation.  Organized housebreakers could take a look around your home while little Johnny is dragging his Internet-enabled megatherium through the living room, and use its GPS to find just where that priceless collection of jewels from the court of Louis XIV is kept on display.  Even creepier is the notion that a crook bent upon kidnapping or worse could start talking to your daughter through her doll:  "Yes, I want you to meet a friend of mine.  He's waiting right outside the front door.  Mommy's asleep, isn't she?  Come on outside . . . ."  Sounds like a bad horror film, but the technology is there already.

The FBI's recommendations are not surprising, for the most part:  know whether the toy you're thinking of buying has been reported for problems with security, read the disclosures and privacy policies provided with the toy (if any), monitor your child's activity with the toy, use good password hygiene, don't tell the company any more than you have to when setting up the toy to work through your wireless system, etc.  Some of this advice falls in the wouldn't-it-be-nice category, such as reading disclosure and privacy policies.  First, hire a lawyer to interpret the policy, if it's written like most boilerplate software agreements.  And while monitoring a child's use of the toy is a good idea, parents can be in only one place at a time, and one reason for buying toys for a child is so they can amuse themselves and not depend on you to be there fending off boredom for them every second.  Or at least that's the impression I get from a few parents I know.

The hazards of smart toys are just one more chink in the Swiss cheese of what used to be armor that most parents erected around their children.  Here's just one example of that armor from my own childhood, back when men were men and megatheriums roamed the earth. 

My father was a six-foot-two, two-hundred-pound repo man for a few years.  Repossessing cars from uncooperative borrowers is not for the faint of heart, and in a crisis I'm sure he could cuss as well as anybody.  But until I was a teenager, I never heard a swear word pass his lips, even when I drove my tricycle into the ladder he was using to hold a paint can and dumped a gallon of gray oil paint all over his head.  (Well, maybe he did cuss then and I just didn't understand what he was saying.) 

The point is that he went out of his way to create a kind of bubble of innocence or protection around us children.  There were some TV shows we couldn't watch and some magazines we couldn't look at, even back in the halcyon 1960s.  Back then, of course, electronic media had just barely started to infiltrate the home, radio and TV being the only means of entry.  Since both my parents were gone before the Internet really got going, I will never know what their reaction to it would have been.  But suffice it to say I don't think my father's impression of it would have been positive.

Some ages exalt and glorify children, and others like ours seem to treat them as kind of an optional hobby for adults, instead of the seedbed of the next fifty to hundred years of civilization.  Like it or not, children in advanced industrial societies are going to grow up in a world where the Internet of Things is as routine to them as electric lights were to people my age.  The main role of parents as parents is to prepare children to live in the world they will inhabit, and hopefully make it a better place.  But first the children have to survive into adulthood.  And while the chances of anything bad happening to your child as a result of a smart toy are remote, it's one more thing to worry about in the process of raising children.  And at least we've been alerted to this problem before anyone has been harmed, as far as we know. 

Sources:  Elisabeth Leamy's article "The danger of giving your child 'smart toys'" appeared on Sept. 29, 2017 in the online version of the Washington Post.

Monday, May 22, 2017

Your Money Or Your Data: The WannaCry Ransomware Attack


On May 12, thousands of users of Windows computers around the globe suddenly saw a red screen with a big padlock image and a headline that read, "Ooops, your files have been encrypted!"  It turned out to be a ransom note generated by an Internet worm called WannaCry.  The ransom demanded was comparatively small—about US $300—but the attack itself was not.  The most critical damage was done in Great Britain, where many National Health Service computers locked up, causing delays in surgery and preventing access to files containing critical patient data.  Fortunately, someone found a kill switch for the worm and its spread was halted, but over 200,000 computers were affected in over 100 countries, according to Wikipedia.

No one knows for sure who implemented this attack, although we do know the source of the software that was used:  the U. S. National Security Agency, which developed something called the EternalBlue exploit to spy on computers.  Somehow it got into the wild and was weaponized by a group that may be in North Korea, but no one is sure. 

At this writing, the attack is mostly over except for the cleanup, which is costing millions as files are restored from backups or re-created from scratch, where possible.  Experts recommended not paying the ransom, and it's estimated that the perpetrators didn't make much money on the deal, which was payable only in bitcoin, the digital currency that is very difficult to trace. 

Writing in the New York Times, editorialist Zeynep Tufekci of the School of Information and Library Science at the University of North Carolina put the blame for the attack on software companies.  She claims that the way upgrades and security patches are done is itself exploitative and does a disservice to customers, who may have good reasons not to upgrade a system.  This was painfully obvious in Great Britain, where their National Health Service was running lots of old Windows XP systems, although the vast majority of the computers affected were running the more recent Windows 7.  Her point was that life-critical systems such as MRI machines and surgery-related instruments are sold as a package, and incautious upgrading can upset the delicate balance that is struck when a Windows system is embedded into a larger piece of technology.  She suggested that companies like Microsoft take some of the $100 billion in cash they are sitting on and spend some of it on free upgrades to customers who would normally have to pay for the privilege.

There is plenty of blame to go around in this situation:  the NSA, the NHS, Microsoft, and ordinary citizens who were too lazy to install patches that they had even paid for.  But such a large-scale failure of what has become by now an essential part of modern technological society raises questions that we have been able to ignore, for the most part, up to now.

When I described a much smaller-scale ransomware attack in this space back in March, I likened it to a foreign military invasion.  That analogy doesn't seem to be too popular right now, but I still think it's valid.  What keeps us from viewing the two cases similarly has to do with the way we've been trained to look at software, and the way software companies have managed to use their substantial monopolistic powers to set up conditions in their favor.

Historically, such monopolistic abuse has come to an end only through vigorous government action to call the monopoly to account.  The U. S. National Highway Traffic Safety Administration, for example, can conduct investigations and levy penalties on auto companies that violate the rules or behave negligently.  So far, software firms have almost completely avoided any form of government regulation, and the free-marketers among us have pointed to them as an example of how non-intervention by government can benefit an industry. 

Well, yes and no.  People have made a lot of money in the software and related industries—a few people, anyway, because the field is notorious for the huge returns it can give a few dozen employees and entrepreneurs who happen to get a good idea first, implement it, and dominate a new field (think Facebook).  But when you realize that the same companies charge customers over and over again for ever-required upgrades and security patches (often bundled together, so you can't keep the software you like without it getting hacked sooner or later), the business model starts to look uncomfortably like an old-fashioned protection racket, in which a guy flipping a blackjack in his hand comes into your candy store, looks around, and says, "Nice place you got here—a shame if anything should happen to it."

Software performs a valuable service to billions of people, and I'm not calling for a massive takeover of software firms by the government.  And users of software have some responsibility for doing maintenance, assuming that maintenance is of reasonable cost and isn't impossibly hard to do, or leads to situations that make the software less useful.  But when a major disaster like WannaCry can cause such global havoc, it's time to rethink the fundamentals of how software is designed, sold (technically, it's leased, not sold), and maintained.  And like it or not, the U. S. market has a huge influence on these things.

Even the threat of regulation can have a most salutary effect on monopolistic firms, which to avoid government oversight often enter voluntarily into industry-wide agreements to implement reforms rather than let the government take over the job.  It's unlikely that the current chaos going on in Washington is a good environment in which to undertake this task, but there needs to be a coordinated, technically savvy, but also ethically deep conversation among the principals—software firms, major customers, and government regulators—to find a different way of doing security and upgrades, which are inextricably tied together. 

I don't know what the answer is, but companies like Microsoft may have to accept some form of restraint on their activities in exchange for remaining free of the heavy hand of government regulation.  The alternative is that we continue muddling along as we have been while the growth of the Internet of Things (IoT) spreads highly vulnerable gizmos all across the globe, setting us up for a tragedy that will make WannaCry look like a minor hiccup.  And nobody wants that to happen.

Sources:  Zeynep Tufekci's op-ed piece "The World Is Getting Hacked.  Why Don't We Do More to Stop It?" appeared on the website of the New York Times on May 13, 2017, at https://www.nytimes.com/2017/05/13/opinion/the-world-is-getting-hacked-why-dont-we-do-more-to-stop-it.html.  I also referred to the Wikipedia article "WannaCry ransomware attack."  My blog "Ransomware Comes to the Heartland" appeared on Mar. 27, 2017.

Monday, October 31, 2016

Zombie Cameras On the Internet of Things


On Friday, Oct. 21, millions of Internet users trying to access popular websites including Twitter, Netflix, the New York Times, and Wired suddenly saw them stop working.  The reason was that for a few hours, a massive distributed-denial-of-service (DDOS) attack hit a domain-name-server (DNS) company called Dyn, based in New Hampshire.  As I mentioned in last week's blog, DNS companies provide a sort of phone-book service that turns URLs such as www.google.com into machine-readable addresses that connect the person requesting a website to the server that hosts it.  They are a particularly vulnerable part of the Internet, because one DNS server can handle requests for thousands of websites, so taking that server down cuts off access to all of those websites for as long as it stays out of service.
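The phone-book lookup that Dyn performs for its customers is the same one any computer does before opening a connection.  A minimal sketch in Python, using only the standard library (the hostname here is just an example):

```python
import socket

def resolve(hostname):
    """Ask the system's DNS resolver to translate a hostname into
    the numeric addresses a browser actually connects to."""
    results = socket.getaddrinfo(hostname, 80, proto=socket.IPPROTO_TCP)
    # Each result tuple ends with a (address, port, ...) sockaddr;
    # the first element of that sockaddr is the IP address string.
    return sorted({sockaddr[0] for *_, sockaddr in results})

# If the DNS server this query depends on is knocked out, this call
# fails with socket.gaierror -- and the website is unreachable even
# though its own servers are running fine.
print(resolve("localhost"))  # typically includes '127.0.0.1'
```

That failure mode is exactly what Dyn's customers experienced: the sites themselves were healthy, but nobody could look up their addresses.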

DDOS attacks are nothing new, but the Oct. 21 attack was the largest yet to use primarily Internet-of-Things (IoT) devices in its "botnet" of infected devices.  The Internet of Things is the proliferation of small sensors, monitors, and other devices less fancy than a standard computer that are connected to the Internet for various purposes. 

Here's where the zombie cameras come in.  Say you buy an inexpensive security camera for your home and get it talking to your wireless connection.  If you're like millions of other buyers of such devices, you don't bother to change the default password or otherwise enhance the security features that would prevent unauthorized access to the device, like you might do if you bought a new laptop computer.  Security experts have known for some time about a new type of malware called Mirai that takes over poorly protected always-on IoT devices such as security cameras and DVRs.  When the evil genius who sent out the Mirai malware sends a signal to the infected gizmos, they all start spouting requests to the targeted DNS server, which immediately gets buried in requests and can't respond to anybody.  That is what a DDOS attack is. 
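Mirai's takeover step was nothing fancier than trying a short list of published factory-default logins against each device it found.  A defensive sketch of the same check in Python (the credential list below is a small illustrative sample, not Mirai's actual table):

```python
# A few widely published factory defaults; real audit lists are much longer.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("root", "12345"),
}

def is_default_credential(username, password):
    """Return True if this username/password pair appears on the
    known-defaults list -- the same lookup a Mirai-style scanner
    performs, used here to audit your own device instead."""
    return (username, password) in KNOWN_DEFAULTS

# A camera still using its factory login would be flagged:
print(is_default_credential("admin", "admin"))      # True
print(is_default_credential("admin", "x7#qT9!lk"))  # False
```

Changing the default password defeats this entire class of attack, which is why the advice sounds trivial but matters so much.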

As the victim learns the nature of the requests, programmers can mount a defense, but skillful attackers can foil these defenses too, for a time, anyway.  The attackers went away after three attacks that day, each lasting a couple of hours, but by then the damage had been done.  The attacks made significant dents in the revenue streams of a number of companies.  And perhaps most importantly, we learned from experience that the much-ballyhooed Internet of Things has a dark side.  The question now is, what should we do about it?

Sen. Mark Warner, a Democrat from Virginia, has reportedly sent letters to the FCC and other relevant Federal agencies asking that same question.  According to a report on the website Computerworld, Warner has a background in the telecom industry and recognizes that government regulation may not be the best answer.  For one thing, Internet technology can change so fast that by the time a legislative or administrative process finally produces a regulation, it can be outmoded even before it's put into action.  Warner thinks that the IoT industries should develop some kind of seal of security approval or rating system that consumers could use to compare prospective IoT devices before they buy. 

This may get somewhere, and then again it may not.  The reason is that an IoT device that can be used in a DDOS attack but otherwise functions normally as far as the consumer is concerned is a classic case of what economists call an "externality."

A more familiar externality involves air-pollution abatement devices on cars:  catalytic converters, the diesel exhaust fluid that truck drivers now have to buy, and all that stuff.  None of it makes your car run better; in fact, cars can get better mileage or performance without the anti-pollution equipment working, as Volkswagen knew when it purposely disabled the anti-pollution function on some of its diesel models and turned it on only to pass government inspections.  The pollution your car would cause without that equipment is an externality:  your own car's contribution is so small that you won't notice it, and only when you add up the contributions of the millions of cars in a city does it become a problem.  But if you don't have anti-pollution equipment on your car, you're adding a tiny bit to the air pollution that everybody in your city has to breathe.  It's that involuntary aspect, the fact that other people are put at a disadvantage because of your action (or inaction), that makes it an externality.

The vulnerability of IoT devices to being used in DDOS attacks is an externality of a similar kind.  When you buy and install a security camera, or rent a DVR from your cable company, and they don't have enough security software installed to prevent them from being used in a DDOS attack, you're raising the risk of such an attack for everybody on the Internet.  And they don't have a choice in the matter.

Historically, externality problems such as air and water pollution have been resolved only when the government gets involved at some level.  When the externality problems are strictly local, sometimes local political pressures can resolve the issue, but the Internet is by its nature global (although, for reasons that are not entirely clear, the Oct. 21 attacks affected mainly East Coast users).  So my guess is that to fix this issue, we are going to need national or international governmental cooperation to set some rules and fix minimum standards for IoT devices regarding this specific problem.

The solutions are not that hard technically:  things like attaching a unique username and password to each IoT device and designing them to receive security updates.  These measures are already in place for conventional computers, and as IoT devices get more sophisticated, the additional cost of these security measures will decline to the point that it will be a no-brainer, I hope. 
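The first of those fixes is cheap to build into the manufacturing line.  A minimal sketch using Python's standard secrets module (the serial-number format and label scheme are invented for illustration):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def provision_device(serial_number, length=16):
    """Generate a unique random password for one device at manufacture
    time, instead of shipping every unit as 'admin'/'admin'.  In
    practice the password would be printed on the unit's label and
    never reused across devices."""
    password = "".join(secrets.choice(ALPHABET) for _ in range(length))
    return {"username": f"dev-{serial_number}", "password": password}

cred = provision_device("SN-000123")
print(cred["username"])       # dev-SN-000123
print(len(cred["password"]))  # 16
```

Because every unit gets a different random password, a Mirai-style scanner armed with a default-credential list has nothing to match against.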
           
But right now there are millions of these gizmos out there that are still vulnerable, and it would be very hard to get rid of them by any means other than waiting for them to break or get replaced by new ones.  So we have created a serious security problem that somebody, somewhere has figured out how to take advantage of.  Let's hope that the Oct. 21 attack was the last big one of this kind.  But right now that's all it is—just a hope. 

Sources:  I referred to the article "What We Know About Friday’s Massive East Coast Internet Outage" by Lily Hay Newman of Wired at https://www.wired.com/2016/10/internet-outage-ddos-dns-dyn/, and the article "After DDOS attack, senator seeks industry-led security standards for IoT devices" by Mark Hamblen at http://www.computerworld.com/article/3136650/security/after-ddos-attack-senator-seeks-industry-led-security-standards-for-iot-devices.html.  I also referred to the Wikipedia articles on "externality" and "Mirai" (which means "future" in Japanese).

Monday, December 07, 2015

Child's Play: Hacking the Internet of Things


A company called VTech based in Hong Kong makes smart toys for kids.  One of their tablet products can connect to a parent's smartphone with a service called KidConnect, allowing children to send photos and text messages to their parents.  Sounds all nice and family-friendly, yes?  Well, in November the website Motherboard revealed that a hacker had managed to get into VTech's servers and download thousands of private photos, messages, passwords, and other identifying information that KidConnect users had sent and received.  This has understandably upset digital media commentator Dan Gillmor, who swears in a recent Slate article that not only will he never buy any Internet-enabled toys for children, he doesn't think anybody else should, either.  Reportedly, VTech has shut down the KidConnect service until they can do something about security.  But this incident brings up a wider question:  what dangers does the Internet of Things pose for children?

In case you've been living in a cave somewhere, the Internet of Things (IoT, for short) is the idea that in the very near future—by some measures, right now—internet connections, sensors, and the hardware and software needed to use them will be so cheap and ubiquitous that lots of everyday items will be connected to the Internet, sending and receiving data that will make great changes in our lives.  The promoters of IoT naturally hope that these changes will be for the better, and can point to examples where they have been.

This matter gets close to home for me personally, because for the last several years I have supervised electrical engineering senior design teams at my university, and several of the past and current teams have worked on projects that are IoT-related.  About four years ago, one team's project was a communications system designed to monitor electric-power consumption in the home, at a finer-grain level than just what the electric meter could sense about overall power consumption.  The idea was that if consumers have a detailed profile of their electricity usage, they can make more intelligent choices about what to turn on when.  Maybe doing the laundry late at night instead of right when you get home in the afternoon will put usage into a more favorable rate period, for example. 
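The kind of choice that project was meant to enable is easy to put numbers on.  A small sketch under a hypothetical time-of-use tariff (the rates and the laundry load are invented for the example):

```python
# Hypothetical time-of-use tariff, in cents per kilowatt-hour.
PEAK_RATE = 20.0     # e.g. 4 pm - 9 pm
OFF_PEAK_RATE = 8.0  # all other hours

def load_cost(kwh, on_peak):
    """Cost in cents of running one appliance load of `kwh`
    kilowatt-hours during the peak or off-peak rate period."""
    return kwh * (PEAK_RATE if on_peak else OFF_PEAK_RATE)

# A 2.5 kWh laundry load: right after work vs. late at night.
print(load_cost(2.5, on_peak=True))   # 50.0 cents
print(load_cost(2.5, on_peak=False))  # 20.0 cents
```

Fine-grained usage data is what lets a consumer attribute that 2.5 kWh to the washer in the first place, instead of seeing only the monthly total.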

As I was discussing the project with the team, it occurred to me that this information could be used for nefarious purposes.  You can tell a lot about a person if you have the kind of usage information the team was planning to measure: whether the user is home, for instance, and even what appliances are used and how often.  So I brought up this ethical issue with the team and made sure that they mentioned it in their final report. 

Since then, companies such as Freescale Semiconductor have jumped into IoT-related products and devices in a big way.  (Full disclosure:  Freescale has donated equipment and funds to the Ingram School of Engineering, where I work.)  From all I can tell, the Internet of Things is going to happen one way or another, and it behooves both engineers and the general public to give some thought to any possible downsides before something really bad happens.

Returning to the question of children and IoT, we are in a peculiar position these days.  Many children and young adults are vastly more tech-savvy than their parents, and this makes it hard for the parents to institute meaningful controls on what kids do online.  In the bad old days when the list of dangerous things in the home was mainly physical—guns, knives, poison, screwdrivers near electric outlets—it was a fairly simple matter for parents to keep toddlers out of harm's way.  But in the case of some toy that hooks up to your WiFi network, odds are that the parents are as clueless as the children regarding the privacy and security measures taken by the device's maker.  VTech itself didn't know how vulnerable its servers were until some enterprising hacker cracked into them and notified the media. 

Despite living with the Internet for close to thirty years now, we still have some things to learn about it, among which are new ways of using it that are potentially hazardous.  And children are an especially vulnerable population, as everyone agrees.  It's also shortsighted to think of children always as the innocent parties in these matters.  Some kids can be downright wicked, bullying others mercilessly.  Before we got so interconnected, a bully's sphere of influence was limited to the radius reachable by his fists, but hand a bully a smartphone with some sort of anonymous chatting app on it, and it's like putting wings on a wildcat.  His bullying sphere has instantly widened to include the entire globe, limited only by language ability and time.  And we have already seen instances in which Internet bullying has driven some vulnerable individuals to suicide.

Nobody is calling for a wholesale ban on Internet-enabled toys or anything like that.  But as I have often emphasized to my students in discussions of engineering ethics, many ethical lapses in the area of engineering can be traced to a lack of imagination.  When you are dealing with a physical structure like a bridge, it's relatively easy to calculate the maximum loads and find out how strong each member has to be for the bridge not to fall down.  But in any system that is intimately bound up with the behavior of people—especially millions of people at a time—your imagination has to anticipate the character and intentions of persons perhaps very different from you, who will twist your system around to serve their possibly sinister purposes. 

That is why privacy and security concerns need to be considered at the very beginning of any project that involves the Internet, and especially when a product is intended to be used by children.  VTech clearly did an inadequate job in this area, but they can serve as a bad example to warn future designers and users of IoT-enabled gizmos.  The craft of lockmaking is nearly as old as the craft of housebuilding, and for a good reason.  There are bad actors out there, and any time we open up a channel of communication involving a private citizen or residence, it needs to be guarded with the same care that we would extend to our own physical possessions.  Beyond mere technical ability, doing that well requires moral imagination, which should be in the toolkit of every good designer.

Sources:  The online magazine Slate carried the article "Parents: This Holiday Season, Do Not Buy Internet-Connected Toys for Your Kids" by Dan Gillmor at http://www.slate.com/blogs/future_tense/2015/12/03/internet_connected_toys_make_terrible_holiday_presents.html.  That article referenced a report at Motherboard describing the VTech hack and what the hacker found, which is at http://motherboard.vice.com/read/hacker-obtained-childrens-headshots-and-chatlogs-from-toymaker-vtech.