Monday, May 31, 2010

Privacy and Social Media: Flap Over Facebook

In a protest over how the social media website Facebook treats privacy issues, 24,500 users have reportedly made plans to commit "digital suicide" on June 1. They have announced that they will take all their content off Facebook in a show of solidarity. This is only the latest incident in a controversy over how Facebook deals with the complicated issue of privacy. There are layers of irony and paradox here that could stand some exploration.

One of the main problems cited in the controversy is that it is very complicated either to figure out or to alter one's privacy settings on Facebook. In an attempt to allay this concern, Mark Zuckerberg, Facebook's 26-year-old CEO, has instituted changes that will reduce the number of settings and pages you have to visit in order to adjust your privacy status. Whether this will solve all the perceived problems remains to be seen.

And of course, being seen is one of the main reasons people get on Facebook in the first place. I speak as a near-total outsider to the whole social media phenomenon. I do not tweet or twitter, I do not have a Facebook presence, and the closest I have come to any of this stuff is when I watch my wife put old family photos on LifeSnapz, a family-photo-sharing site based in Chicago. Just out of curiosity, I did a little experiment this morning in trying to party-crash my wife's LifeSnapz material, registering simply to see if there was an easy way for me to look at the stuff she's posted without her invitation. There wasn't, but then, I'm not a determined hacker, either.

Facebook, apparently, is a whole different proposition. There's all kinds of publicly accessible stuff on Facebook, and that is part of the problem. Zuckerberg and his staff face conflicting priorities. On the one hand, they would like people on the site to share as much of their private information as possible, both because it encourages other people to be more active and do the same, and because it helps Facebook's advertisers and other third parties get what they want as well, namely, information on potential customers. On the other hand, there are clearly limits to what some users want to share, and at times Facebook has gone beyond those limits.

I'm sure this schizophrenic conflict is experienced by many individual users too. There's the whole issue of "sexting," for example, which is not so much a problem on Facebook (although I'm sure it has come up) as it is among teenagers with camera-equipped cell phones. Back when telephones were telephones and cameras were cameras, your average fifteen-year-old girl would have needed considerable conscious thought and planning to put a camera in her purse, take it to some private location, partially disrobe, take a picture of herself (or have a friend do it, more likely), get the film developed (without having the guy at the drugstore counter blow the whistle on her), and physically hand the salacious product to her boyfriend. Maintaining one's intention through such a series of obstacles in 1995 would have daunted all but the hardiest future porn stars.

But now that most teenagers have cellphones, and most cellphones have cameras, and there are no snoopy drugstore-counter clerks or other humans in the pathway between one phone and another, the technology has made this sort of misbehavior so much easier that a lot of kids do it. The only thing stopping them is a fear of adverse consequences if they go too far, and many teenagers don't have the forethought to consider such consequences until after they have happened. The same kind of ease-of-use issue is at the root of privacy concerns on Facebook too.

If a website is too hard to use, people won't use it, but what "too hard to use" means depends on why people are using it in the first place. When "use" grows to include fine-tuning your privacy settings, what was formerly seen as adequate becomes inadequate: hence the protests and Zuckerberg's efforts to make setting one's privacy controls easier.

The effect of all this technological soul-baring runs in one direction: letting other people know more about you than they could before. Back when total obscurity was the default setting of 99.999% of the world's population and it took massive amounts of resources simply to send a letter from one end of Europe to another, privacy was essentially built into the hardware of existence, and so there was no special need to safeguard it. But now that there are strong economic forces favoring the universally accessible blatting of one's most intimate secrets to all and sundry, we are faced with the novel problem of deciding what, if anything, was good about the old situation, and which parts of it we want to preserve.

There are quiet human virtues which are so low-profile that they attract little attention, and in a publicity-conscious age tend to fall out of consciousness altogether. But without them, the social fabric wears thin and we find ourselves missing these virtues without really knowing what went wrong. Sexual purity is one such virtue; discretion—the ability to share information only when it is the right thing to do—is another. Those who are flagrantly lacking such virtues get most of the immediate attention, and if you believe the old saying that there's no such thing as bad publicity, it looks like no harm is done. But when it comes time for such people to desire a special relationship, or true intimacy, and they find that there is nothing special or private saved up that they can share with that special person—that's when these kinds of virtues are missed.

I hope Facebook fixes their privacy-control problems, but I doubt that they will ever post a warning on their site about the virtue of discretion, at least in so many words. They may say things like, "If you don't want people to know certain things, don't post them," but that doesn't get to the core problem. The core problem may be that we have a whole generation which has a very limited idea of what true privacy is. And as they get older, they may wish they had learned.

Sources: The item describing the digital suicides was carried by a media outlet in India at http://www.business-standard.com/india/news/facebook-faces-digital-suicides-today/396539/. Mr. Zuckerberg's announcement was covered by CNET on May 26, 2010 at http://news.cnet.com/8301-13577_3-20006054-36.html.

Monday, May 24, 2010

Military and Civilian Engineering: The X-37B and the Space Shuttle

The profession of engineering has deep roots in military culture and military organizations. Both in France and the U. S., the first engineering schools in the late 1700s and early 1800s were military academies, and the first people trained in what we would now call the profession of engineering were military servicemen educated in the technicalities of forts, armaments, and related matters. When such training proved to be useful in fields other than war, the first practitioners of non-military engineering were called "civil engineers" to distinguish them from the only other kind at the time. Although the military employs only a minority of engineers today, the story of the X-37B says a lot about the different ways a military and a civilian organization go about achieving similar goals.

The X-37B is a recently launched unmanned space vehicle that the U. S. Air Force has developed, apparently to maintain its ability to launch spy satellites as the Shuttle program flies its last scheduled missions. Like the Shuttle, it is a reusable craft with vestigial wings; its Shuttle-inspired design dates back to 1999, when NASA asked Boeing to develop an earlier version, the X-37. During the last decade, according to Wikipedia, the NASA design served as the basis for the Air Force's X-37B, which was announced in 2006 and then cloaked mostly in secrecy. Unlike NASA, whose proceedings are open and publicized almost to a fault, the Air Force gives out only such information as suits its purposes. So, for example, we have only an early artist's conception of what the X-37B really looks like. But when the launch of the first X-37B took place last month (April 22, to be exact), amateur satellite observers and others figured out pretty fast what was happening.

The Air Force has always had a claim on a certain number of Shuttle flights to deliver its most advanced spy satellites into orbit. Even now we do not have full data on the nature of these satellites, but there is enough indirect evidence to show that they produce images superior to anything you can find on Google Earth, for example, and can be reconfigured and steered to watch trouble spots in most parts of the world as needed. During the Cold War, these satellites played an essential role in arms-reduction verification and many other aspects of that conflict, and after the Soviet Union came apart the programs continued for obvious reasons, since having eyes in the sky better than anyone else's will always provide a strategic advantage in both war and peace.

As long as the Shuttle was in operation, it could be relied upon to deliver new spy satellites, but the hiatuses caused by the two major accidents (Challenger in 1986 and especially Columbia in 2003), plus the planned ending of the Shuttle program, inspired the Air Force to find an alternative. The nice thing about a military organization is that it is largely unencumbered by democracy. Democracy, I am convinced, is the best way to conduct public affairs. But once a specific technical objective has been decided upon, a well-run military organization has a much better chance of delivering the goods on time and under budget than other types of organizations. So now, at fairly low cost (in the hundreds of millions rather than many billions, apparently) and in about a decade (including the seven-year NASA development, or even less time if you consider only the Air Force version), we have a space vehicle that performs one of the most important functions of the Shuttle. And by its very nature, nobody on board can ever get killed, because nobody is on board to start with.

Of course, the X-37B has a limited range of tasks it can do. It is a butter knife to the Shuttle's Swiss army knife: it can do only one thing, but it should do that one thing pretty well. Advances in remotely piloted vehicles and robotics have allowed the Air Force to do without people on board, and while situations may arise in which a person in space would come in handy, you can still do a lot with robots nowadays, only perhaps more slowly. But during an X-37B flight, there is no time pressure to get a task done before the oxygen and food run out and the humans have to be carted back safely to Earth. Things can just take as long as they take. So in some ways, operations with the X-37B should be more deliberate and therefore better planned and executed.

Does this mean I favor a military type of organization for all engineering works? To a large degree, that is what we already have. The large commercial firms that do engineering have mimicked military organization in more ways than you might think. An engineer at a large company may not have to salute his boss or do kitchen-police duty for getting to work late, but everyone in a company knows there is a strict chain of command that one violates at his or her peril.

Of course, there are problems with the military style of doing things as well. When input from a large number and variety of constituencies should be considered, as in a public work that affects lots of people, the military style does not function that well. This problem has played out in situations such as the deteriorated state of the dikes and flood-protection systems that were the nominal, but not total, responsibility of the U. S. Army Corps of Engineers in New Orleans before Katrina struck. To be fair, the Corps had its hands tied with regard to much of that infrastructure, and things might have gone better if it had taken over complete control of all aspects of the system. But that was a political impossibility.

Nevertheless, when you have a specific, clear-cut job to be done, it looks like handing it over to the military arm can work pretty well. That assumes, of course, that the military either possesses or has access to the necessary technical expertise. The Deepwater Horizon oil spill that is still going on in the Gulf of Mexico has inspired calls to shove British Petroleum out of the way and put the military in charge. As I mentioned a few blog posts ago, the problem with this idea is that BP and their contractor Transocean have all the smarts in this case. But if the problems that BP and Transocean are having are organizational rather than technical, they might benefit from having the Marines run things for a while.

Sources: The Wikipedia article "Boeing X-37" supplied most of my data on the NASA X-37 and the Air Force X-37B.

Sunday, May 16, 2010

Google Admits Sniffing Private Info

In a blog post last Friday, Google admitted that since 2006 its Street View photography cars have also been collecting bits of private data from unencrypted private WiFi networks as the cars drive by. According to Google, the collection of private data in this way was unintentional, but it has landed the company in hot water with the German data-protection authorities, whose inquiry prompted the discovery.

As a rule, European states have a greater regard for data protection and privacy issues than many jurisdictions in the Americas. So when a (presumably) American engineer working for Google thought it would be a good idea to collect just the network identification data of wireless networks that the Street View car passed by, apparently no one at Google saw anything objectionable in the idea. The problem was, the software that the engineer wrote also collected what is called "payload data"—that is, content of emails, websites being viewed, and whatever else goes over one's unencrypted wireless network. (Encrypted networks were not sniffed.) I can imagine that it was easier simply to grab and store all the data at once and then sort out the network ID stuff later, than it was to do it "on the fly" while the car was in motion. But this meant that everywhere the Street View cars went—and by now they've traveled probably millions of miles in most cities of the world—their hard drives were accumulating little pieces of private information that were exactly correlated with location and scenery. And presumably, as Google is a well-run engineering outfit, all this data was carefully collected and archived somewhere, even though no one seemed to realize that the private stuff was in there along with the network ID information.
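
Just to illustrate how small the difference can be in code between "grab the network ID data" and "grab everything," here is a minimal sketch using the Python scapy library. This is not Google's actual software, which has never been published; the interface name "wlan0mon", the callback names, and the dump filename are hypothetical, and running something like this requires a wireless card in monitor mode and root privileges. The point is only that restricting a capture to beacon frames (which carry network names and addresses) versus archiving every frame wholesale is a matter of which callback you happen to pass in.

```python
# Sketch only: contrasts logging network IDs with logging whole frames.
# Assumes a monitor-mode interface named "wlan0mon" (hypothetical).
from scapy.all import sniff, Dot11, Dot11Beacon, Dot11Elt

def log_network_ids_only(pkt):
    # Keep only network identification data: the access point's MAC
    # address (BSSID) and its advertised network name (SSID).
    if pkt.haslayer(Dot11Beacon):
        bssid = pkt[Dot11].addr2
        ssid = pkt[Dot11Elt].info.decode(errors="replace")
        print(f"network {ssid!r} at {bssid}")

def log_everything(pkt):
    # Archive every 802.11 frame wholesale -- on an unencrypted network,
    # this sweeps up payload data (email fragments, web traffic) as well.
    with open("wardrive.dump", "ab") as f:
        f.write(bytes(pkt))

# Swapping one callback for the other is the entire difference in behavior.
sniff(iface="wlan0mon", prn=log_network_ids_only, count=100)
```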

Then along comes the data protection agent of Hamburg, Germany, who asks just exactly what are you collecting with that car? What is all this wireless stuff for? Let me see the hard drive. It's encrypted? Hum, well, tell me what's on it. And Google, in accordance with one of its founding precepts, namely, "Don't be evil," honestly checked and honestly found to its dismay that it had been collecting all this private stuff for the last four years, all over the world.

There is some good news and bad news here. The good news is that, to all appearances, this was a genuine error, not a sinister plot to collect blackmail data on people all over the world so as to increase Google's bottom line illegally. And when challenged by a duly constituted authority, Google personnel didn't lie, cover up, or illegally dispose of the data. Instead, they did the hard thing in the short term, which was for Alan Eustace, the Senior VP of Engineering and Research, to post a blog entry admitting that an earlier post was in error, that Google did indeed inadvertently acquire and collect private data, and that they were going to do everything they could to remedy the situation.

The bad news is, at least for Google, that their honesty has not mollified various European authorities to any great extent. The very collection of such data, even if you do nothing with it (as Google apparently has not), is illegal in Germany, and according to a New York Times report, officials are going to consult the European Commission to decide what penalties will be appropriate. The Street View feature itself has already been under attack there, and one German legislator has introduced a bill that would allow private citizens to request that their property not appear on Street View at all, with a hefty fine for Google for each instance of non-compliance. This law would seriously compromise the usefulness of Street View, and it might be simpler for Google to just make Germany disappear altogether—so to speak.

This incident highlights the fact that a single engineer working on something that will be used in a large project should take the trouble to consider all the places the software or hardware might be used, not just some of them. Google has a reputation for putting huge resources behind innovative notions, and that's good, but with those resources comes the responsibility of being more careful than would be necessary if what you're working on involved only you and Joe, the neighbor down the street. I'm sure this lesson will be remembered and pounded into the heads of future engineers whose products are used in places that are more touchy about data security and privacy than, say, Austin, Texas.

It also shows the limitations of the idea of privacy in a globally interconnected age. Already, if you carry a cell phone in the U. S. and many other countries, your cellphone company "knows" where you are at least to within the accuracy of a cell (which can be anywhere from a few hundred feet to several miles across), and soon there may be software and hardware on phones that will use GPS and other technologies to narrow that down to within a few yards. In general, we trust our phone companies not to use this information to our detriment, but so far, it is just a matter of trust, not law. And when it suits the law's purposes, as when a criminal is being tracked down, phone companies can be made to yield up that data. Any time you walk outside, satellite photography can almost make out your visage as you smile at the nice sunshiny day, not to mention the thousands of security cameras everywhere, and if you go inside and get on your computer, all kinds of folks can find out all kinds of things about you without your knowledge. In the U. S. we are perhaps more content with less of certain kinds of privacy than other countries are, in keeping with our long history of freedom.
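
To give those accuracy figures some scale, here is a quick comparison of the "you are somewhere in here" areas involved. The radii are round illustrative numbers of my own choosing, not measurements of any particular network or handset.

```python
import math

FEET_PER_MILE = 5280

def area_sq_ft(radius_ft):
    # Area of the circle within which your location is known.
    return math.pi * radius_ft ** 2

small_cell = area_sq_ft(500)                   # small urban cell, a few hundred feet
large_cell = area_sq_ft(3 * FEET_PER_MILE)     # large rural cell, several miles
gps_fix = area_sq_ft(15)                       # GPS fix good to a few yards

print(f"Small cell : {small_cell:,.0f} sq ft")
print(f"Large cell : {large_cell:,.0f} sq ft")
print(f"GPS fix    : {gps_fix:,.0f} sq ft "
      f"({small_cell / gps_fix:,.0f} times tighter than even a small cell)")
```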

Whether we will live to regret what may be viewed in the future as an excess of openness, or whether Europe will strangle data innovation with cumbersome laws that leave it increasingly without new services, only time will tell.

Sources: The New York Times article on Google's admission appeared in the May 14, 2010 online edition at http://www.nytimes.com/2010/05/16/technology/16google.html. Mr. Eustace's blog entry on the subject appears at http://googleblog.blogspot.com/2010/05/wifi-data-collection-update.html. And full disclosure: The website blogspot.com on which this blog appears is owned by Google.

Tuesday, May 11, 2010

Offshore Oil Regulation: Who's To Judge?

As Transocean and British Petroleum try to lower yet another big box to stop the underwater gusher that threatens to turn many Gulf beaches into hazardous-waste environments, President Obama is talking about taking a hatchet to the U. S. Minerals Management Service, the agency that both oversees many aspects of mining and well-drilling and collects (or is supposed to collect) fees from private entities who have permission to mine or drill on government land or water. Clear? Well, the conflict the President sees is that the people who stand to benefit (or at least make their agency look good) from lots of drilling and the royalties derived therefrom are the same folks who are supposed to play policeman and make sure all this is done safely. While splitting the agency into enforcement and collection halves is a nice idea on paper and gives politicians a sense that they're doing something about the problem, it may just paper over a deeper problem: how do you regulate something so complicated that only the people who do it really understand it?

Time and again, reporters have shown how the government regulators of many industries, from petroleum to communications to finance, are either former employees of the very firms they are charged with regulating, or (what seems even worse) rely on the companies they regulate to do the actual inspecting, and take their word that things are going well.

On the surface, this kind of thing looks bad. We all feel that a person who has depended for their livelihood on a particular organization or industry will be prejudiced in favor of that entity, even if the former private employee enters government service to regulate the very business they used to work for. So what is the alternative?

The only way to get rid of all possible prejudice of this kind is to select regulators who have no association whatsoever with oil wells, or radio stations, or banks, or whatever the target of the regulator's scrutiny is. But right away we run into a problem: if you've never drilled a well, or run a radio station, or worked in a bank, can you know enough to regulate it?

Sometimes, maybe so. My father was a banker, and every year or two he'd come home complaining about the upcoming visit of the bank examiners. He never told me what their backgrounds were, but I imagine that back then, a degree in general accounting was probably okay. But if a banker was determined to pull a fast one, it seems like it would be better if the fellow trying to catch him in the act had actually stood in his shoes and learned all the little details of procedure and so on that allow clever nefarious schemes to succeed. It's the old "it takes a thief to catch a thief" idea (no aspersions on bankers intended).

The same problem happens to the nth degree when a highly technical field such as offshore oil production is in question. As I learned long ago when I once thought my Ph. D. in electrical engineering qualified me to fix an oscilloscope, nobody knows a system better than the people who work inside it day to day. So handing my scope over to a technician with a two-year degree and five years of experience fixing just those kinds of scopes is going to work a lot better than my trying to fiddle with the thing. There is a lot of what chemist and philosopher Michael Polanyi called "tacit knowledge" out there: stuff that you can't find in books, but which is essential to the proper functioning of machines, systems, and organizations. And nothing teaches tacit knowledge like experience.

This same issue has arisen when people ask why the U. S. government or the Coast Guard or the Marines or somebody with a uniform hasn't been called in to fix the Deepwater Horizon oil spill, instead of letting the same doofuses who broke it in the first place try to fix it. The simple answer is that those "doofuses" happen to be the world experts on this kind of thing, and even experts foul up every now and then. Asking the government to shove the private owners aside in order to step in would be pushing away the best expertise we have, and that would be simply stupid.

In the attempts to fix the Deepwater Horizon spill, we may be witnessing the outworkings of a kind of failure that results not just in shifts in government bureaucracies, but in fundamental technical changes that render a whole industry safer and better equipped to do its job in the future. Engineer and historian Henry Petroski has shown how certain failures in nineteenth-century iron bridges closed down whole avenues of design and opened up others. Despite what they (we?) teach you in school, you can sometimes learn more from failures than from successes. Once that well is capped, or plugged, or committed to perdition some way or other, and all the hearings are over and the reports written, we will know a lot more about how this accident happened, and how blowout preventers with double and triple backups can nevertheless fail. But the best people to learn this stuff are the very ones who are going to go out and do it better next time. All the government regulators you can hire straight out of school are not going to know quite as much as the experts they regulate, and so the answer lies not simply in more regulation, but in smarter regulation, and smarter engineering. Let's hope we get both.

Sources: The New York Times carried an article about President Obama's plans at http://www.nytimes.com/2010/05/12/us/12interior.html?hp. Henry Petroski's To Engineer Is Human: The Role of Failure in Successful Design (Vintage, 1992) is still in print, and a good treatment of just what the title says.

Monday, May 03, 2010

Deep Problems from the Deepwater Horizon

Two weeks ago tomorrow, on Apr. 20, an explosion and fire on the oil-drilling platform Deepwater Horizon off the Louisiana coast resulted in the presumed deaths of eleven people and the sinking of the structure two days later. Initially, it was thought that an automated device called a "blowout preventer" (BOP in petroleum-engineer speak) would shut off the high-pressure oil from the well, whose head is about a mile below the ocean's surface. But soon after the structure sank, oil started showing up on the surface. British Petroleum, the owner of the well, and Transocean Inc., the operator hired by BP, initially estimated that 1,000 barrels a day were leaking out. More recently the estimate has risen to 5,000 barrels a day, and as of today (Monday morning, May 3) the slick has come within nine miles of the Louisiana coastline. Already the federal government has prohibited all fishing operations in the region for the next ten days, and things look like they will get worse before they get any better.
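
To put those estimates in more familiar units, here is a back-of-the-envelope conversion using the standard figure of 42 U. S. gallons per oil barrel; the arithmetic is mine, not from the sources.

```python
# Convert the reported leak estimates from barrels per day to gallons per day.
GALLONS_PER_BARREL = 42  # standard U.S. oil barrel

for barrels_per_day in (1000, 5000):
    gallons_per_day = barrels_per_day * GALLONS_PER_BARREL
    print(f"{barrels_per_day:>5} barrels/day = {gallons_per_day:,} gallons/day")

# 1,000 barrels/day =  42,000 gallons/day
# 5,000 barrels/day = 210,000 gallons/day
```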

There are nearly 4,000 offshore oil rigs in the Gulf of Mexico, most of them concentrated south of Louisiana, and as long as things operate smoothly, they are out of the public consciousness despite the fact that almost a third of our domestic oil production originates there. Partly because there have been no major headline-grabbing spills in recent years, President Obama recently called for increased offshore drilling in selected areas. The Deepwater Horizon disaster has put that on hold, and threatens to turn public opinion against offshore drilling for a long time.

Out of the 4,000 or so offshore oil rigs that operate without major problems, why did the Deepwater Horizon explode and sink? And even more urgently now, why didn't the blowout preventer work? These are technical questions that will require months of investigation to answer, although computerized logs and telemetry from the platform should help considerably. The blowout preventer, a three-story-high assemblage of hydraulic equipment designed to withstand the tremendous pressures five thousand feet underwater, is a sophisticated multi-stage system that sits on top of the ocean floor and surrounds the well pipe assembly. It is essentially a large automatic shutoff valve, using hydraulic pressure to activate guillotine-like rams or rubber-and-steel rings that impose enough counterpressure to block the several-thousand-pounds-per-square-inch pressure behind the oil emerging from the ocean floor. Normally it is activated by remote control from the platform, but before the platform sank, operators tried to activate it without success. When underwater remotely operated vehicles (ROVs) reached the BOP's control panel and flipped the control switches, nothing happened. According to online discussions, in the event of a major disaster such as the loss of the platform, stored hydraulic energy in devices called accumulators should be sufficient to make the BOP do its job. But this didn't work, for reasons that are not yet clear.
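
Just to get a feel for the environment the BOP works in, here is a rough estimate of the ambient seawater pressure alone at a 5,000-foot depth, using the hydrostatic relation p = ρgh. This is my own back-of-the-envelope calculation with a typical seawater density; the pressure of the oil and gas coming up the well is higher still, which is what the BOP's rams ultimately have to hold back.

```python
# Rough hydrostatic estimate of ambient seawater pressure at the wellhead.
RHO_SEAWATER = 1025.0        # kg/m^3, a typical seawater density
G = 9.81                     # m/s^2
DEPTH_FT = 5000.0
DEPTH_M = DEPTH_FT * 0.3048  # feet to meters

pressure_pa = RHO_SEAWATER * G * DEPTH_M
pressure_psi = pressure_pa / 6894.76  # pascals per psi

print(f"Ambient seawater pressure at {DEPTH_FT:.0f} ft: about {pressure_psi:,.0f} psi")
# Roughly 2,200 psi from the water column alone, before counting well pressure.
```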

Time is now critical, but unless one of the dwindling number of remedies that British Petroleum engineers have not yet tried on the BOP works, the other options for shutting off the increasing flow of oil from the well will take weeks at the least. A risky idea that has apparently never been tried at such depths involves lowering large metal cans or funnels over the leaks (there are apparently at least two in the broken and twisted riser pipe) and trying to "vacuum" up the oil that way. All sorts of complications and challenges attend this approach, from the buoyancy of oil that might literally float the cans away to the differential pressure that could crush pipes and disable suitable submersible pumps, only a few of which exist anywhere in the world. The third way, which is going to be done sooner or later in any event and is pretty likely to work, is to drill a relief well, which could be better understood as a capping well. This involves starting at a safe distance and drilling sideways to hit the exact location of the original well in order to send mud or cement into it and stop the flow. For the relief well to work, pinpoint accuracy is required, somewhat like hitting a rain gutter on the side of a building from half a mile away. While accuracy like this can be achieved, it takes two to three months to do it. And by that time, the oil could have reached Gulf shores all the way from Louisiana to Florida.
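
To put the rain-gutter analogy in numbers, here is a quick sketch of the angular tolerance it implies. The six-inch gutter width is my own assumption, chosen only to illustrate the analogy, not a figure from the drilling plans.

```python
import math

# Angular tolerance implied by "hitting a rain gutter from half a mile away".
GUTTER_WIDTH_FT = 0.5        # assume a six-inch-wide gutter
DISTANCE_FT = 0.5 * 5280     # half a mile, in feet

angle_deg = math.degrees(math.atan(GUTTER_WIDTH_FT / DISTANCE_FT))
print(f"Allowed aiming error: about {angle_deg:.3f} degrees")
# Roughly a hundredth of a degree of aiming error -- the kind of directional
# precision the analogy suggests a relief well needs to intersect the bore.
```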

By now, British Petroleum is the poster child of the Petroleum Industry Hall of Infamy. The Texas City refinery explosion that killed fifteen workers five years ago happened largely because of BP's lax safety standards, and while it is too soon to assess BP's culpability for the initial explosion and fire on the Deepwater Horizon, which was operated in any event by Transocean, no amount of feel-good institutional advertising is going to overcome the public perception that BP is careless about safety. In the meantime, let's hope that the effort to stop the oil leak is managed safely, efficiently, effectively, and fast.

Sources: I used information from the following websites: http://en.wikipedia.org/wiki/File:Gulf_Coast_Platforms.jpg has a map of oil platforms in the Gulf, http://www.timesonline.co.uk/tol/news/world/us_and_americas/article7114487.ece is an article in The Times of London about attempts to shut off the oil flow, and attached to the photo of the ROV shutoff attempt at http://www.flickr.com/photos/uscgd8/4551846015/ is a long thread of discussion among engineers about the problems surrounding the disaster and how to shut off the well flow. Also, CNN has a good graphic of the three major approaches to shutting it off at http://www.cnn.com/2010/US/05/01/explainer.stopping.oil.leak/index.html.