Showing posts with label FBI. Show all posts

Monday, June 06, 2022

Cyberattack Forestalled—For a Change

 

This blog focuses on engineering ethics situations that make headlines.  And by the nature of what makes headlines, most of the time it's bad news.  But every so often, some disaster is narrowly averted instead of going ahead and killing people or causing damage, so today I'd like to look at a small but significant success story, as reported in a recent Associated Press item.

 

Back in 2014, a hacker and activist named Martin Gottesfeld got upset about a teenager under treatment at Boston Children's Hospital who was involved in a highly publicized custody battle.  Gottesfeld decided to use his hacking skills to jam up the hospital's computer networks with junk data, an attack that took two weeks to unsnarl and cost the hospital an estimated $600,000.  The FBI tracked him down, and he was convicted and sentenced to ten years in prison.

 

This incident familiarized the Children's Hospital IT people with the FBI.  So when the FBI learned last summer that hackers apparently hired by Iran were planning a cyberattack on the hospital, it was able to supply the hospital's IT staff with enough defensive help to forestall the attempted attack.  FBI Director Christopher Wray unveiled this bit of good news recently at a cybersecurity conference at Boston College.

  

For every really bad engineering-related tragedy that hits the headlines, there are usually several other less harmful or even harmless incidents that go unreported, either because the results were not bad enough to make the news, or because someone fixed the problem before it got really out of hand.  The FBI's success in preventing Boston Children's Hospital from falling victim to Iranian-sponsored cyberterrorists is in this category. 

 

In today's hyperspeed news cycle, the traditional slant toward bad news that has existed ever since print media was invented has only gotten worse.  This means that most of what we learn about institutions of all kinds—government agencies, the legal profession, the medical profession, and even religious organizations—tends to be critical or derogatory in some way.

 

Now to some extent, that is as it should be.  One important function of a free press is to search out wrongdoing and incompetence and expose it to the light of publicity, where one hopes that the democratic process, or embarrassment, or something, will cause an improvement in the situation.  So it's only natural that editors choose stories about something going wrong over happy-clappy items that say how wonderfully some new product is working, or how some federal agency successfully rescued people from a disaster.  But some good news does get out anyway.

 

Specifically with regard to the FBI, public opinion of the bureau has shifted in recent years.  According to a 2019 Pew Research Center poll, the percentage of Americans with a favorable opinion of the FBI remained remarkably constant among both Democrats and Republicans from 2010 to 2016, within a few percentage points of 70%.  After that, partisanship began to show: the percentage of Republicans viewing it favorably dipped to about half, while the percentage of Democrats rose above 70%.  Still, on average, as of three years ago, the FBI was viewed favorably by a majority of U. S. citizens, according to Pew.

           

In complex industrial societies we depend so much on the proper functioning of institutions that it's hard to imagine what we'd do if they broke down.  But there are a number of ways public institutions can fail, and one of them is to lose the public's trust. 

 

Even if an institution's actual performance is just as good as it ever was, if somebody convinces a lot of people that the institution is untrustworthy, it's going to be harder for the institution to carry out its job.  On the other hand, prior good experience with an institution tends to carry forward favorably.

 

Boston Children's Hospital is a case in point.  From the 2014 experience, its staff had a positive view of the FBI, and probably some personal relationships, that made it easy for the bureau to convince them of the seriousness of the recent Iranian threat.  Consequently, they took action that successfully prevented the attack. 

 

But they didn't have to take the FBI seriously, and if this had happened to an organization that either had no prior history with the FBI, or a negative one, the protective advice might well have been ignored, to the detriment of everyone involved.

 

Some of the largest technically intensive institutions that have taken big hits in their public perception these days are the social-media firms:  Facebook, Google, Twitter, and company.  Elon Musk presumably represents no one other than himself, but his recent move to buy Twitter and take it private is being applauded by those who feel that Twitter has been too high-handed in censoring and banning certain views and people from its platform.

 

At the same time, social media, along with the way the Internet treats news in general, bear a lot of responsibility for driving a wedge between the public and all kinds of institutions, social media giants included.  Now that real-time feedback has been finely tuned to maximize "engagement," millions (billions, if you count global numbers) are constantly whipped into outrage about something, and that something usually involves some kind of public institution—if you broaden the definition of "institution" to include things like the Kardashians.

 

It looks like Aristotle's advice about finding a happy medium needs to be followed here.  All-good-news-only media are confined to totalitarian countries such as Russia, and that extreme is to be avoided.  But we may have something closer to the opposite extreme: an institution-corroding situation in which the only thing you hear about the government, educators, legislators, media personalities, and churches is bad news. 

 

I think the real answer lies not so much in yet more government regulation, or eccentric billionaires taking media companies private, but in a more mature citizenry who will not let themselves be coerced into a kind of universal cynicism, but instead use the ancient virtues of justice and prudence to find out the truth amid the smog of disinformation and hype.  And achieving that maturity has to happen one person at a time. 

 

Sources:  The AP website carried the item about Boston Children's Hospital at

https://apnews.com/article/russia-ukraine-technology-health-middle-east-e4f8e7145e4b4447a331d4b0cc5a5bd3.  I also referred to the Pew poll report summarized at https://www.pewresearch.org/politics/2019/10/01/public-expresses-favorable-views-of-a-number-of-federal-agencies/. 

Monday, October 02, 2017

Internet Security Isn't Child's Play


Full disclosure:  my wife and I have never had children.  The closest we have come to full-time responsibility for someone younger than 80 was when our ten-year-old nephew came to stay with us for part of the summer of 2013.  So what I have to say about the hazards of buying smart Internet-connected toys for your kids is, from my point of view, entirely hypothetical and untouched by the seasoning of personal experience.  Nevertheless, it's a new kind of problem and those with parental responsibilities need to be aware of it.

For the last several years, one of the biggest trends on the consumer-electronics horizon has been the Internet of Things (IoT).  It's now so cheap to connect tiny, inexpensive devices to increasingly powerful cloud-computing apps on the Internet that companies are falling over each other trying to get their IoT-enabled gizmos to consumers.  And the gold-rush analogy is especially apt for the toy market, which is highly seasonal and driven by novelty even more than the rest of the consumer business. 

When IoT came along, we began to see a flock of toys that connect to the Internet for some of the same reasons devices for adults do:  message sharing, video recording, GPS-enabled location features, and so on.  But when adults use IoT-enabled equipment, there is at least a presumption that they can read the instructions and take whatever precautions are needed to keep malign third parties from exploiting the window into their personal lives that bringing an IoT-enabled device into the home opens. 

Not so with children.  A recent story in the Washington Post details a consumer notice the FBI issued about "smart toys" that connect to the Internet.  Inspired partly by European recalls of a talking doll that a hacker could use as a listening device, the FBI says that parents should be very careful about purchasing or setting up any toy that can connect to the Internet. 

While I'm not aware of any crimes shown to have been committed by such means, it's not hard to imagine such a situation.  Organized housebreakers could take a look around your home while little Johnny is dragging his Internet-enabled megatherium through the living room, and use its GPS to find just where that priceless collection of jewels from the court of Louis XIV is kept on display.  Even creepier is the notion that a crook bent on kidnapping or worse could start talking to your daughter through her doll:  "Yes, I want you to meet a friend of mine.  He's waiting right outside the front door.  Mommy's asleep, isn't she?  Come on outside . . . ."  Sounds like a bad horror film, but the technology is there already.

The FBI's recommendations are not surprising, for the most part:  know whether the toy you're thinking of buying has been reported for security problems, read the disclosures and privacy policies provided with the toy (if any), monitor your child's activity with the toy, use good password hygiene, don't tell the company any more than you have to when setting up the toy to work through your wireless system, and so on.  Some of this advice falls in the wouldn't-it-be-nice category, such as reading disclosure and privacy policies.  First, hire a lawyer to interpret the policy, if it's written like most boilerplate software agreements.  And while monitoring a child's use of the toy is a good idea, parents can be in only one place at a time, and one reason for buying a child toys is so they can amuse themselves and not depend on you to be there fending off boredom for them every second.  Or at least that's the impression I get from a few parents I know.

The hazards of smart toys are just one more chink in the Swiss cheese of what used to be armor that most parents erected around their children.  Here's just one example of that armor from my own childhood, back when men were men and megatheriums roamed the earth. 

My father was a six-foot-two, two-hundred-pound repo man for a few years.  Repossessing cars from uncooperative borrowers is not for the faint of heart, and in a crisis I'm sure he could cuss as well as anybody.  But until I was a teenager, I never heard a swear word pass his lips, even when I drove my tricycle into the ladder he was using to hold a paint can and dumped a gallon of gray oil paint all over his head.  (Well, maybe he did cuss then and I just didn't understand what he was saying.) 

The point is that he went out of his way to create a kind of bubble of innocence or protection around us children.  There were some TV shows we couldn't watch and some magazines we couldn't look at, even back in the halcyon 1960s.  Back then, of course, electronic media had just barely started to infiltrate the home, radio and TV being the only means of entry.  Since both my parents were gone before the Internet really got going, I will never know what their reaction to it would have been.  But suffice it to say I don't think my father's impression of it would have been positive.

Some ages exalt and glorify children, and others like ours seem to treat them as kind of an optional hobby for adults, instead of the seedbed of the next fifty to hundred years of civilization.  Like it or not, children in advanced industrial societies are going to grow up in a world where the Internet of Things is as routine to them as electric lights were to people my age.  The main role of parents as parents is to prepare children to live in the world they will inhabit, and hopefully make it a better place.  But first the children have to survive into adulthood.  And while the chances of anything bad happening to your child as a result of a smart toy are remote, it's one more thing to worry about in the process of raising children.  And at least we've been alerted to this problem before anyone has been harmed, as far as we know. 

Sources:  Elisabeth Leamy's article "The danger of giving your child 'smart toys'" appeared on Sept. 29, 2017 in the online version of the Washington Post at

Monday, February 22, 2016

Apple Versus the Feds: How a Smartphone Stymied the FBI


When Syed Farook and Tashfeen Malik died in a hail of gunfire last December 2 after killing 14 people at a San Bernardino office party, the FBI recovered Farook's iPhone within a few hours.  One of the critical unanswered questions about the San Bernardino shootings is whether the couple had outside help, and the data on the iPhone may hold the answer.  Problem is, the FBI can't get at the data, and Apple, the iPhone's maker, won't help them.

Why not?  Let's let Tim Cook, CEO of Apple, answer that one:  "[T]he U.S. government has asked us for something we simply do not have, and something we consider too dangerous to create. They have asked us to build a backdoor to the iPhone."  A little historical perspective is in order to put this situation into context.

With the advent of powerful digital computers, advanced encryption algorithms were designed and adopted by both sides of the Cold War, the U. S. and the Soviet Union alike, for secret communications from the 1970s onward.  The U. S. National Security Agency, long used to spying on analog communications in which good radios were the most elaborate equipment needed, found itself behind the technology curve and spent millions on advanced computing technology to maintain its ability to crack enemy codes.  The computing power of those early NSA computers now resides on your smartphone, and after a run-in with the NSA a few years ago involving spying on Apple, the tech company and its CEO resolved to do a better job than ever of protecting its customers' privacy.  The latest iPhone operating system has a feature that not only encrypts the user's private data, but destroys the internal encryption key if it detects more than 10 attempts to unlock the phone using the 4-digit passcode.  After that happens, nobody but God can retrieve the data. 
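As a rough illustration of the lockout logic just described, here is a minimal Python sketch.  It is emphatically not Apple's actual implementation; the attempt limit, the stored passcode, and the "key destroyed" flag are simplified stand-ins for what the real operating system and security hardware do.

```python
# Simplified sketch of a wipe-after-10-failed-attempts lockout policy.
# Illustrative only -- not Apple's actual implementation.

MAX_ATTEMPTS = 10

class LockedPhone:
    def __init__(self, passcode):
        self._passcode = passcode      # hypothetical stored secret
        self._failed = 0
        self.key_destroyed = False     # once True, data is unrecoverable

    def try_unlock(self, guess):
        if self.key_destroyed:
            return False               # nothing left to unlock
        if guess == self._passcode:
            self._failed = 0
            return True
        self._failed += 1
        if self._failed >= MAX_ATTEMPTS:
            self.key_destroyed = True  # encryption key wiped
        return False

phone = LockedPhone("4821")
for guess in ("0000", "1111", "2222"):
    phone.try_unlock(guess)
print(phone.key_destroyed)  # still False after only 3 wrong tries
```

The point of the design is visible even in this toy version: once the failure counter reaches the limit, even the correct passcode no longer works, because the key the data was encrypted with is gone.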

At first the FBI was hoping that the phone was backed up to the iCloud, where the data might be recovered.  But it turns out that the automatic backup feature was turned off last October, possibly by Farook to avoid just such snooping.  After trying everything they could think of, including things Apple suggested, the FBI has asked Apple to do something that the firm claims is unprecedented. 

The FBI wants Apple to write a new operating system for Farook's phone that will allow unlimited password tries to be made electronically, letting the FBI get at the phone's data.  The agency says the software will be used only on Farook's phone, and so there is no risk to anybody else's phone.  The FBI has put this request in the form of a court order, and Tim Cook has vowed to fight it.

Why?  Apple claims the risks of that system getting loose, either accidentally or by command, are simply too great, and they have dug in their heels.  For example, it has been suggested that once it becomes generally known that Apple has developed such a backdoor, repressive regimes will order the firm to give it to them, or else kick Apple out of the country.

This is not the first time that Apple and the federal government have been at loggerheads over encrypted data.  In a 2014 case, Apple was ordered to extract data from an iPhone, but it is not immediately clear from the record whether they complied.  In both that case and the San Bernardino situation, the FBI cited as its authority the All Writs Act of 1789, which basically lets courts issue writs (orders) "necessary or appropriate in aid of their respective jurisdictions and agreeable to the usages and principles of law."   To the ears of this non-lawyer, it sounds like the law basically says you can do whatever you want, but the Act is typically hauled out as a kind of last resort, as subsequent case law has erected a set of four conditions that must be fulfilled before a court can issue an order under the Act.  Of course, the FBI thinks the conditions are fulfilled, and Apple doesn't.

Apple's stand is based on the idea, not that common among high-tech companies, that even Apple doesn't have any business with your personal data, which is why they designed the iPhone operating system to be so hard to crack.  This differs from practices of other firms, who happily mine their customers' private data for commercially valuable things like brand names and so on.  Privacy advocates from across the political spectrum have joined Cook in his opposition to the order, and the outcome of this case could have wide implications not only for the FBI and smartphones, but for digital privacy generally.

National Review commentator Kevin Williamson (from whose column I first learned about this matter) takes the view that the FBI is taking the easy way out by simply ordering Apple to do its job.  There is evidence to support this claim.  For example, in its instructions to Apple, the FBI asked the company to rig a Bluetooth link to the phone so agents could try all 10,000 possible number combinations electronically, instead of having to make somebody sit there and enter them by hand.  This apparently minor detail has the aroma of a royal order to underlings: "and while you're at it, fix it so I don't mess up my manicure wearing my fingers out on that touchscreen of yours."  Back in the days of telephone hacking in the 1960s, teenagers with time on their hands would amuse themselves by dialing all 10,000 numbers in a given 3-digit telephone exchange (e. g. 292-0000 to 292-9999) just for the thrill of discovering the test and supervisory numbers the phone company used for long-distance routing and maintenance.  Apparently, the FBI can't be bothered with such tedium.
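The arithmetic of the brute-force attack is easy to check: a 4-digit passcode runs from 0000 through 9999, which is exactly 10,000 possibilities.  The short Python sketch below enumerates them; the per-guess timing is an invented figure purely to show why an electronic interface makes the attack practical.

```python
# Enumerate every 4-digit passcode, 0000 through 9999.
candidates = [f"{i:04d}" for i in range(10_000)]

print(len(candidates))                # 10000 possible codes
print(candidates[0], candidates[-1])  # 0000 9999

# At, say, one guess per 80 ms (a hypothetical electronic rate),
# exhausting the whole space takes well under a working day:
seconds = len(candidates) * 0.080
print(f"{seconds / 60:.1f} minutes")  # 13.3 minutes
```

Of course, the ten-failures-and-wipe feature described earlier is precisely what makes this tiny search space dangerous to attack, which is why the FBI also wanted the lockout disabled.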

The matter is in the hands of lawyers now, and if the issue does indeed go all the way to the Supreme Court, its fate may well depend on whether President Obama gets to appoint a new member after Justice Scalia's recent demise, or whether the next president does, or whether a split Court ends up doing nothing (split decisions leave the lower court's decision standing).  Whatever happens, I admire Tim Cook for taking a principled and consistent stand for a cause that he could so easily abandon:  the notion that privacy still means something in a digital age.

Sources:  Kevin Williamson's column "Hurray for Tim Cook" can be found at National Review Online at http://www.nationalreview.com/article/431491/apples-tim-cook-right-resist-governments-demand.  I referred to articles by ABC News reporter Jack Date carried on Feb. 19, 2016 at http://abcnews.go.com/US/san-bernardino-shooters-apple-id-passcode-changed-government/story?id=37066070 and Feb. 17 at http://abcnews.go.com/US/fbi-iphone-apples-security-features-locked-investigators/story?id=36995221.  I also referred to an article in The Guardian online at http://www.theguardian.com/technology/2016/feb/19/apple-fbi-privacy-encryption-fight-san-bernardino-shooting-syed-farook-iphone, and Wikipedia articles on encryption software and the All Writs Act of 1789.

Monday, September 07, 2015

Stingray and the Swiss Cheese of Electronic Privacy


The main distinguishing characteristic of Swiss cheese is that it's got holes in it.  This image came to mind when I read a recent report about a cellphone tracking device colloquially known as Stingray.  These expensive, sophisticated devices are contributing to a pernicious double standard about electronic privacy.  Private citizens on the one hand, and local and state law enforcement authorities on the other hand, appear to be working under very different rules.

Ordinary U. S. citizens are forbidden to eavesdrop on private electronic communications over the airwaves.  Back in the days when cellphones transmitted easily received analog signals, this meant you could not buy scanners that covered cell-phone frequencies.  And wiretapping—connecting a listening device to a telephone wire—was something that only authorized law enforcement people could do.  Back then, even the cops first had to get a court to issue a warrant for a wiretap, which was limited as to time and the target of the wiretapping.  Just to make sure that these restrictions weren't overwhelmed by new technological developments, in 1986 Congress passed the Electronic Communications Privacy Act (ECPA), which extended restrictions on landline communications to the then-new wireless types.

Then there was 9/11 and a burst of foreign terrorism, and a need arose to track cellphones in foreign countries that were being used for nefarious purposes, like setting off improvised explosive devices.  In response to this demand, the Harris Corporation developed a clever system that has come to be called the Stingray.  To track and eavesdrop on a target cellphone, you set up the Stingray in the general vicinity of the target; a few dozen or hundred yards is probably close enough.  When the target phone is activated, the Stingray pretends to be a real cellphone tower, sending out a "pilot" signal that is stronger than the genuine tower's pilot nearby, and capturing not only the target phone but many others in the vicinity.  In its most sophisticated mode, the Stingray performs real-time decryption of the encrypted cellphone data and relays the content of the phone call (or text message, or what have you) to the legitimate system, while making copies for the cops.  In this mode, any calls the target phone originates go through as usual, except that the law enforcement people using the Stingray can hear and read everything in the vicinity.
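The capture step turns on one simple fact of cell-network design: a phone camps on whichever tower advertises the strongest pilot signal.  The toy Python sketch below models just that choice; the tower names and signal-strength numbers are invented for illustration, and real cell selection involves far more protocol machinery than a single comparison.

```python
# Toy model of cell selection: the phone camps on the strongest pilot.
# Tower names and signal strengths (in dBm) are invented for illustration.

towers = {
    "legitimate_tower": -95,   # real carrier tower, farther away
    "stingray": -60,           # rogue tower nearby, stronger pilot
}

def select_tower(pilots):
    # Pick the tower with the strongest (least negative) signal.
    return max(pilots, key=pilots.get)

print(select_tower(towers))  # stingray
# Every phone in range that makes this choice is now relayed --
# and recorded -- by the fake tower.
```

The design choice being exploited is benign in itself: preferring the strongest signal is exactly what gives you good reception.  The Stingray simply wins that contest on purpose.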

I can't refer you to an advertising brochure or an official website on the Stingray, because Harris cloaks the device in secrecy.  Any agency buying one has to sign a non-disclosure agreement in which they promise not to divulge any details about it.  Nevertheless, the technology has become quite popular among the better-heeled state and local law enforcement agencies that can afford up to a half-million-dollar price tag.  And it is by no means clear that the agencies get proper court authorization before using the Stingray.  So your phone call or text might be showing up on a police computer near you—without your knowledge, of course.

In recent months, considerable information has leaked out about the Stingray and how it is being used, and there's even a Wikipedia webpage devoted to the technology.  It was most recently in the news when Deputy U. S. Attorney General Sally Yates announced on Sept. 3 that Federal investigators will now have to obtain a judge's permission before using cellphone trackers.  As recently as six months ago, the Feds were arguing in court that no such permission was necessary.  So on the federal level at least, some measure of protection has been restored to electronic privacy.  However, the ruling does not apply to state and local jurisdictions, which can presumably still use the Stingray and similar devices with impunity.

This is only one of many situations in which technology has outrun the legal system's ability to adapt to it.  Despite the blanket prohibitions of the ECPA, state and local law enforcement agencies are apparently using Stingrays frequently with or without court approval, depending on what the patchwork legal context in the specific region will let them get by with.  Sometimes, use of the device is revealed only in a court case when defense attorneys start asking embarrassing questions.  In Tallahassee, Florida, the state prosecutor gave an armed-robbery suspect a reduced sentence rather than being forced to disclose details of how a cellphone was tracked to the criminal's house—by use of a Stingray, presumably.

It may be that most, if not all, uses of this technology are approved by courts, although in some cases judges have complained that they were not aware of exactly what they were approving.  If so, we are in principle no worse off privacy-wise than we were under the old regime of wiretapping laws, in which a court order was required before telephone-company technicians would install a wiretap. 

We actually have two sets of Swiss cheese here:  one is the public's Fourth Amendment protection against unreasonable searches and seizures, and the other is the Harris Corporation's attempts to keep its technology out of the public eye.  Any system that has a 4500-word article on Wikipedia about it is no longer secret in any meaningful sense.  But nobody can sit down and build one for themselves just from the information on Wikipedia, and as long as nobody steals a physical unit and tries to reverse-engineer it, Harris is probably safe from getting their prize cellphone-tracker knocked off. 

There are two conflicting stakes here:  one on the part of the general public not to have its private communications eavesdropped on at the whim of a local police force, and another on the part of Harris Corporation not to have their advanced and very profitable cellphone tracker either copied or rendered useless by equally sophisticated bad guys who figure out some way to foil the Stingray.  One easy way to foil it is simply not to carry a cellphone, but for most people nowadays, that's like telling them not to breathe.  For the foreseeable future, anyway, many crimes will involve cellphones one way or another, and the Stingray will continue to be useful in tracking down criminals.

My metaphorical hat is off to Deputy Attorney General Yates, who has at least clarified the situation at the federal level so that Stingrays will be used only with the proper authorization—we hope.  Maybe the state and local agencies will now follow the Federal lead and be more circumspect about how they use the devices, at least until the next round of electronic spy-and-counterspy warfare comes to pass.

Sources:  The New York Times article "Justice Dept. To Require Warrants for Some Cellphone Tracking" appeared on Sept. 3, 2015 at http://www.nytimes.com/2015/09/04/us/politics/justice-dept-to-require-warrants-for-some-cellphone-tracking.html.  I also referred to an earlier New York Times article "A Police Gadget Tracks Phones—Shhh-It's a Secret" at http://www.nytimes.com/2015/03/16/business/a-police-gadget-tracks-phones-shhh-its-secret.html.  The Washington Post carried the article about the plea bargain in Florida at https://www.washingtonpost.com/world/national-security/secrecy-around-police-surveillance-equipment-proves-a-cases-undoing/2015/02/22/ce72308a-b7ac-11e4-aa05-1ce812b3fdd2_story.html, and I also referred to the Wikipedia articles "Stingray Phone Tracker" and "Telephone Tapping," and a How Stuff Works article on how wiretapping works at http://people.howstuffworks.com/wiretapping3.htm.