Monday, July 27, 2015

The Wireless-Car-Hack Recall: A Real-Life Drama in Three Acts

Act One—2010-2011

As automakers begin to build more wireless technology into their cars to enable not only hands-free mobile phone use but also streaming audio services and navigational and safety aids, researchers at UC San Diego and the University of Washington look into the possibility that these new two-way communication paths can be used to hack into a car's computer for nefarious purposes.  After months of work, they manage to use a wireless connection to disable the brakes on a particular car, whose maker remains anonymous to this day.  Rather than releasing the maker's name in their 2011 research publication, the researchers suppress it, and instead go privately to the car's manufacturer and warn it of the vulnerability.  Also in 2010, more than 100 car owners in the Austin, Texas area whose vehicles were linked into a system that can disable a car if the owner gets behind in his payments found that their cars wouldn't start.  Only, they weren't deadbeats—one of the enforcement company's employees got mad at his boss and intentionally disabled the cars. 

Act Two—2012-2013

Two freelance computer security specialists, Charlie Miller and Chris Valasek, read about the UCSD/University of Washington wireless-car-hack study and decide to investigate the issue further.  They apply for and receive an $80,000 grant from the U. S. Defense Advanced Research Projects Agency (DARPA), with which they buy a Ford Escape and a Toyota Prius.  With this hardware, they teach themselves the intricacies of the automakers' internal software and, as a first step, develop a wired approach to hacking into a vehicle's control systems.  This allows them to plug a connector into the car's diagnostic port and operate virtually any system they wish.  However, when they show this ability at Defcon 2013, a hackers' convention, representatives of automakers are not impressed, pointing out that the pair needed a physical connection to do the hacking.  That inspires Miller and Valasek to go for the ultimate hack:  wireless Internet control of a car, and a demonstration of same to a journalist.

Act Three—2014-2015

After reading dozens of mechanics' manuals and evaluating over twenty different models, the pair decide that the model most vulnerable to an online hack is the Jeep Cherokee.  Miller buys one in St. Louis and the pair begin searching for bugs and vulnerabilities in its software.  Finally, in June of 2015, Valasek issues a command from his home in Pittsburgh and Miller watches the Cherokee respond in his driveway in St. Louis.  They have succeeded in hacking remotely into the car's CAN bus, which carries the commands for virtually all essential functions such as brakes, throttle, transmission, wipers, and so on. 
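The CAN bus the pair broke into carries short, fixed-format messages, and its design helps explain why remote access to the bus amounts to remote control of the car.  A minimal Python sketch of a classic CAN 2.0A frame follows; the ID and payload below are invented for illustration, not the actual Jeep commands.

```python
# Minimal sketch of a classic CAN 2.0A frame: an 11-bit arbitration ID
# plus up to 8 payload bytes. The ID doubles as the bus-arbitration
# priority (lower ID wins), so any node that can inject frames can
# outrank legitimate traffic from the car's own control modules.
from dataclasses import dataclass

@dataclass
class CanFrame:
    arb_id: int      # 11-bit identifier, also the priority
    data: bytes      # 0-8 payload bytes

    def __post_init__(self):
        if not 0 <= self.arb_id <= 0x7FF:
            raise ValueError("classic CAN IDs are 11 bits")
        if len(self.data) > 8:
            raise ValueError("classic CAN carries at most 8 data bytes")

    @property
    def dlc(self) -> int:
        """Data-length code: the number of payload bytes."""
        return len(self.data)

# Hypothetical frame -- real Jeep IDs and payloads were not published here.
frame = CanFrame(arb_id=0x244, data=bytes([0x00, 0xFF]))
print(hex(frame.arb_id), frame.dlc)
```

Because the bus has no built-in notion of which node is authorized to send which ID, anything with bus access can masquerade as the brake or transmission controller.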

After the lukewarm reception they received from automakers a couple of years earlier, the pair have decided that a stronger stimulus is needed to get prompt action.  When they informed Fiat Chrysler of their hacking of the firm's Cherokee back in October of 2014, the response was minimal.  Accordingly, they invite Wired journalist Andy Greenberg to drive the Cherokee on an interstate highway, telling him only in general terms that they will do the hack while he's driving, and surprise him with particular demonstrations of what they can do. 

Greenberg must have felt like he was in a bad sci-fi flick about aliens taking over.  As he recalled the ride, "Though I hadn’t touched the dashboard, the vents in the Jeep Cherokee started blasting cold air at the maximum setting, chilling the sweat on my back through the in-seat climate control system. Next the radio switched to the local hip hop station and began blaring Skee-lo at full volume. I spun the control knob left and hit the power button, to no avail. Then the windshield wipers turned on, and wiper fluid blurred the glass."  During the finale, the hackers disabled the transmission, throwing it into neutral and causing a minor backup on the interstate.

Greenberg's article appears on Wired's website on July 21.  On July 24, Fiat Chrysler announces a recall of 1.4 million vehicles to fix software flaws that allow its cars to be hacked remotely via the Uconnect Internet connection that Miller and Valasek used.  It is the first recall ever issued because of a demonstrated flaw that lets hackers access a car through its Internet connection.

. . . Back in December of 2014, I blogged on the possibility that someone would figure out how to use the Internet to hack into a car's controls.  At the time, I reported that several automakers had formed an Information Sharing and Analysis Center to pool knowledge of problems along these lines.  And I hoped that nobody would use a remote hack for unethical reasons.  What Miller and Valasek have done has ruffled some feathers, but falls short of truly illegal activity. 

Instead, it's in the tradition of what might be called "white-hat" hacking, in which security experts pretend to be bad guys and do their darndest to hack into a system, and then let the system designers know what they've done so they can fix the bugs.  According to press reports, pressure from the National Highway Traffic Safety Administration prompted Fiat Chrysler to issue the hacking recall as promptly as it did, only three days after the Wired article appeared.  The annals of engineering ethics show that a little adverse publicity can go a long way in stimulating action by a large organization such as a car company. 

You might ask why Fiat Chrysler's own software engineers couldn't have done what Miller and Valasek did, sooner and more effectively.  That is a complex question that involves the psychology of automotive engineers and what motivates them.  Budgeting for someone to come along and thwart the best efforts of your software engineers to protect a system is not a high priority at many firms.  And even if an engineer at Fiat Chrysler had concerns, chances are that his superiors would have belittled them, much as automakers' representatives belittled Miller and Valasek's demo of the wired hack in 2013.  To do anything more would have required a whistleblower to go outside the company to the media, which would probably have cost him his job. 

But this way, Miller and Valasek get what they wanted:  real action on the part of automakers to do something about the problem.  They also become known as the two Davids who showed up the Goliath of Fiat Chrysler, and this can't do their consulting business any harm.  Best of all, millions of owners of Cherokees and other vehicles can scratch one small worry off their list:  the fear that some geek will pick their car out of a swarm on a GPS display somewhere and start messing with the radio—or worse.

Sources:  The Associated Press article on the Fiat Chrysler recall appeared in many news outlets, including ABC News on July 24.  The Wired article by Andy Greenberg describing the Cherokee hack appeared on Wired's website.  My most recent previous post on this subject appeared on Dec. 1, 2014.

Monday, July 20, 2015

I Compute Your Pain: Emotion-Sensing Software

My wife usually knows when I'm upset long before I do.  I haven't performed a scientific study to determine how she does this.  She says she reads my body language, the tone of my voice, and my facial expressions, as well as what I say.  Women seem to have a built-in advantage when it comes to sensing the emotional states of others, so it's not a surprise that the co-founders of a company that sells software to read emotions were two women:  Rosalind Picard and Rana el Kaliouby.  The history of why they began their research into getting computers to sense emotions and what their company is doing now may tell us something about the ethical challenges to come if companies begin using emotion-reading software on a large scale.  A recent article in The New Yorker profiles these women and their work.

Back in the 1990s, almost no one in computer science was thinking professionally about emotions.  One of the few exceptions was Rosalind Picard, who has been on the MIT faculty since 1991.  She realized that computers could serve people better if they had a clue as to what emotional state a person was in.  Despite confused stares and even active discouragement from computer-science colleagues, she persisted in researching what she termed "affective computing" and wound up establishing an entirely new field.

Rana el Kaliouby entered the fray by a similar route.  Her first idea of a practical application of affective computing was to develop a kind of emotional hearing aid for autistic people, whose disability usually prevents them from inferring the emotional state of people around them.  She wound up teaming with Picard on some academic research projects, which attracted so much attention that they decided to spin off a company called Affectiva in 2009.

Once their ideas left academia for the commercial world, the tone of things changed.  Every second, thousands of marketers are competing for online attention—in TV ads, YouTube video ads, smartphone apps, and all the other electronic attention-grabbers we surround ourselves with these days.  Someone has even calculated what the attention of the average American is worth:  about six cents per minute, as of 2010.  Your attention is the coin of the realm that you exchange for "free" Internet services, and the companies that sell these services would dearly like to know how you feel about what you see.  This is what Affdex, the software offered by Affectiva, is supposed to do.

It works by monitoring facial expressions in a sophisticated way that uses fixed points (e.g., the tip of the nose) as references for the movement of the eyebrows, the corners of the mouth, and other features that have been shown to be emotionally expressive.  The result is a readout along four emotional dimensions:  happy, confused, surprised, and disgusted.  I expect most marketers try to get the biggest happy readings, maybe laced with a little surprise here and there, while lowering the confused and disgusted numbers.  Anyway, lots of companies are willing to pay lots of money to get these numbers.
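The landmark-tracking idea can be illustrated with a toy calculation:  track feature points relative to a stable anchor such as the nose tip, and score how far they have moved from a neutral baseline.  This is only a sketch of the general approach; the point names, coordinates, and the single "smile" feature below are all invented, and Affdex's actual trained classifiers are far more elaborate.

```python
# Toy illustration of landmark-based expression scoring: measure the
# mouth corners relative to a stable anchor (the nose tip) and score
# their upward displacement from a neutral baseline -- a crude "smile"
# feature. All coordinates here are made up for illustration.

def smile_score(neutral, current):
    """Mean upward displacement of the mouth corners, nose-anchored."""
    def rel(frame, name):
        nx, ny = frame["nose_tip"]
        x, y = frame[name]
        return (x - nx, y - ny)
    total = 0.0
    for corner in ("mouth_left", "mouth_right"):
        _, y_neutral = rel(neutral, corner)
        _, y_current = rel(current, corner)
        total += y_neutral - y_current   # image y grows downward, so up = smaller y
    return total / 2.0

neutral = {"nose_tip": (100, 100), "mouth_left": (80, 130), "mouth_right": (120, 130)}
smiling = {"nose_tip": (100, 100), "mouth_left": (78, 124), "mouth_right": (122, 124)}
print(smile_score(neutral, smiling))
```

Anchoring to the nose tip is what makes the measurement insensitive to the whole head shifting in the frame, which is presumably why systems of this kind lean on such fixed reference points.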

So far, the main use has been in focus groups and other controlled settings where consumers have consented to the use of video of their faces.  You can imagine the day when those little built-in cameras in computers and smartphones will be activated for emotion-reading software, possibly without your knowledge.  That is a line which, to my knowledge, has not been crossed yet.  But I can easily picture a situation in which a browser turns on your camera to watch your face, maybe in exchange for some bonus or free this or that, and somehow it just never gets turned off again.

Like most technology, affective computing can be used for either good or not-so-good purposes.  If software developers could learn how to sense a user's emotions, it could make software a lot easier to use.  I can think of many times when I was trying to do something with new software and got frustrated or confused.  Software that could sense this and trot out simpler and simpler explanations and help files, and even ask questions ("Just what are you trying to do?") would be a tremendous advance over that peculiar sense of helplessness I get when I face a zillion menu options and know one of them will do what I want, only I don't have twelve hours to spare in order to try each one. 

On the other hand, both Picard and el Kaliouby realize that this sort of software could be abused.  Picard left Affectiva in 2013, and although el Kaliouby is still with the firm, she expresses some disappointment that the commercial applications of Affdex have overshadowed the assistive applications for autism sufferers and other disabled people.  If one tries to come up with a worst-case scenario for how emotion-reading software could be abused, some sort of subliminal manipulation comes to mind.  What if emotion-reading ads prove to be well-nigh irresistible? 

Years ago, there was a flap of concern that advertisers were inserting single-frame images in TV ads that went by so quickly your conscious mind didn't even notice them.  But supposedly, they went straight to your subconscious and made you go out and buy a Coke you didn't need, or something like that.  So-called subliminal advertising has proven to be useless, but we have yet to see how effective advertising becomes when it's coupled to software that can read the viewer's emotional state and change its presentation in real time in response.  Of course, a good salesman does this instinctively, but up to now Internet advertising has been open-loop, with no way of knowing how the viewer felt about the ad.  Software such as Affdex promises to close that loop. 

Let's hope that affective software leads to a kinder, gentler interaction with the machines that take up an increasing part of our lives, without taking us down a road that amounts to secret manipulation of consumers without their knowledge or consent.

Sources:  The article "We Know How You Feel" by Raffi Khatchadourian appeared in the Jan. 19, 2015 issue of The New Yorker and provided many of the details in this post.  I also referred to the Wikipedia article on Rosalind Picard and to the Affectiva website.

Monday, July 13, 2015

You Can't Take It With You: Airline Security as Industrial Engineering

The first question you may have is, what's industrial engineering?  It's an uninformative name for an important discipline:  the study of how best to design industrial processes of every type, from time-and-motion studies of assembly lines to how we can treat more hospital patients better with fewer resources.  The subject came to my mind recently as I was lifting empty gray plastic tubs from a conveyor belt onto a stacking machine that automatically recycled them to the head of the line at a security checkpoint in London's Heathrow Airport.  How did I get that job?  Well, let me begin at the beginning.

This was on the return leg of a business trip to France.  Lest you think I'm a world traveler, I should say that this was my first international trip in five or six years, and I'd never been to France before.  The conference I attended was in the "department" (sort of a state) of Cantal, in the legendary South of France, which is world-famous for its wine and cheese.  These attractions were somewhat wasted on me as I don't drink wine and I don't like cheese.  But at the end of the conference, the organizers gave me a box containing selections of the local foods.  I received it eagerly with the hope that I could take it back to my wife, who is fond of cheese and has been known to have a sip of wine now and then.  What neither I nor the conference organizers reckoned on was airline security.

For those familiar with the U. S. Transportation Security Administration system, the French procedure is similar, except you don't have to take off your shoes.  They had set aside my bag after the X-ray, though.  They wanted to know what a dark cylindrical object was.  I took out the cardboard box of goodies and opened it up for them.  There was a can of pâté of some kind.  When they saw what it was, they were happy, and let me go.  So far, so good, but next came British security in London.

At Heathrow, everything is very organized.  First you queue up and have to go through a kind of museum display of all kinds of items you can't put in your luggage.  Anything liquid or liquid-like, you have to take out and put in a clear plastic bag for them to sniff at.  I'd done this with my toothpaste and thought I was all ready for them.  Think again.

It really did look like a miniature production line when I got to the luggage X-ray system.  You are handed these large gray plastic tubs and everything has to fit in a tub.  Then this industrial-quality conveyor belt with motorized rollers lines up the tubs to go through the X-ray.  If something fishy shows up, all the operator has to do is push a button, and a set of push rods shoves the suspect tub out of the main line into a second inspect-by-hand line behind a clear plastic barrier.  To my dismay, that happened to both of my tubs of stuff.

My glasses were in one of the tubs, but I could see well enough to watch my luggage as it sat behind the barrier.  It was well back in a line of several, and so it would be a while before I could get at it again.  Just to have something to do, I started picking up the empty tubs that the more fortunate travelers were leaving behind, and stacked them on the automatic return gizmo.  While I was doing this for ten or fifteen minutes, I noticed one of the inspect-by-hand inspectors playing with what I thought at first was a back-scratcher he'd confiscated from someone.  It was a blue plastic wand about a foot and a half (50 cm) long with some white cloth thing at the end. 

Then I noticed he was wiping it over contents of a piece of luggage and taking it over to a machine.  He removed the cloth and stuck it in the machine.  Turns out it was an ion mobility spectrometry (IMS) device that uses the varying speed of ions under the influence of an electric field to detect vapors of explosives, drugs, and other non-allowed chemicals.  Very clever technology which has made it out of the lab into the field—at least, a lot of airfields.
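The physics of such a drift tube reduces to a back-of-the-envelope calculation:  an ion of mobility K in an electric field E drifts at velocity v = K * E, so it crosses a tube of length L in time t = L / (K * E), and different compounds, having different mobilities, arrive at different times.  Here is a small sketch with typical textbook magnitudes, not the specifications of any particular detector.

```python
# Back-of-the-envelope sketch of how an IMS drift tube separates ions:
# an ion of mobility K (cm^2 per volt-second) in a field E (V/cm)
# drifts at v = K * E (cm/s), so a tube of length L (cm) is crossed in
# t = L / (K * E). Arrival times differ because mobilities differ.
# The numbers below are illustrative textbook magnitudes only.

def drift_time_ms(K_cm2_per_Vs: float, E_V_per_cm: float, L_cm: float) -> float:
    """Drift time in milliseconds for an ion of mobility K."""
    v = K_cm2_per_Vs * E_V_per_cm        # drift velocity, cm/s
    return L_cm / v * 1000.0             # seconds -> milliseconds

# Two hypothetical ions in the same 10 cm tube at 200 V/cm:
for K in (2.0, 1.5):                     # cm^2 / (V*s)
    print(f"K = {K}: {drift_time_ms(K, 200.0, 10.0):.1f} ms")
```

Drift times on the order of tens of milliseconds are why these instruments can screen a swab in seconds, which fits what I watched at the checkpoint.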

Finally, they got to my bag.  I pulled out the notorious box and showed him the contents.  There was a glass bottle of something—maybe it was beer—and a jar of what looked like marmalade. 

"I'm sorry, sir, but these are both over 200 milliliters.  To take them with you, you'll have to go back out and check this bag."  There wasn't time, so I bid my marmalade and beer, or whatever it was, good-bye.  I never even got a good look at them.  Then he let me go.

All was well after that till the U. S. passport control point at the Austin airport.  At passport control, there were just a bunch of kiosks with computerized touchscreens asking you a series of questions.  One of them was about food brought back from abroad. 

I faced an ethical dilemma.  I knew if I said I had none, I'd be lying, but I could also get through quicker.  Partly just to see what happened, I answered yes, I did have food from abroad.  There was a big "A" on the slip of paper that came out, which I handed to a man at the exit.  He put a big red checkmark on it, stuffed it in a blue folder, and told me to go have a seat over there.

Over there turned out to be a waiting area with people in it who looked like they were all at a funeral.  I said, "Nobody looks very cheerful over here."  One lady griped that this was what she got for being honest. 

In a bit, a brisk gal in a uniform came up, said, "Everybody with blue folders, come with me."  I was the only one, so I followed her into a room where once again, I took out the box and showed her what was in it.  This time the offending item was a piece of dried sausage sealed in plastic wrap.  "The meat has to go, but you can keep the other stuff."  So it went, and in another minute so did I.

Based on my limited sample during this trip, I'd say the British win the industrial-engineering competition for most efficient carry-on-luggage inspection.  They also took the most stuff.  All I have left from Cantal is a bag of cookies and that can of pâté.  I don't know much French, but I think the label says it has a guaranteed minimum fat content of 30%.  Pâté, anyone?

Sources: You can read more about ion mobility spectrometry at the website of a manufacturer of these systems, Smiths Detection: 

Monday, July 06, 2015

Inside Out For Real: Brain Mapping and Privacy

Recently my wife and I went to see "Inside Out," the Pixar animated comedy about a girl named Riley and what her five personified emotions—Joy, Anger, Disgust, Fear, and Sadness—do in her brain when she's uprooted as her family moves from Minnesota to San Francisco.  It sounds like an unlikely premise for any kind of a movie, but Pixar pulled it off, zooming into the minds of Riley, her mother, her father, her teacher, and even a few pets for good measure. 

The idea of getting inside somebody's brain to see what's really going on makes for a good fantasy, but what if we could do it now?  And not just in laboratory settings with millions of dollars' worth of equipment, but with a machine costing only a few thousand bucks, within the budget of, say, your average police department?  If you think about it, it's not so funny anymore.

Mind-reading technology is not just around the corner, to be sure.  But what gets me thinking along these lines, besides seeing "Inside Out," is an article about some new brain-scanning technology being used by Joy Hirsch and her colleagues at the Yale Brain Function Lab. 

The biggest advance in recent years in monitoring what's going on in a living brain has been fMRI, short for functional magnetic resonance imaging.  This technology uses an advanced form of the familiar diagnostic MRI machine to keep track of blood flow in different parts of the brain.  Because more brain activity goes with more blood-oxygen use, fMRI shows different brain areas "lighting up" as various mental tasks are performed. 

While great strides in correlating mental activities with specific parts of the brain have been achieved with fMRI, the machinery is expensive, bulky, and temperamental, involving liquid-helium-cooled magnets and cutting-edge signal processing systems that confine it to a few well-equipped labs around the world.  But now Joy Hirsch has come along with a completely different technology involving nothing more complex than laser beams and a fiber-optic piece of headgear that fits on your (intact) skull like a high-tech skullcap.  From the photo accompanying the article, it looks like you don't even have to shave your head for the laser beams to go through the skull and into the top few millimeters of the brain.  While that misses some important parts, a lot goes on in the upper layers of the cerebral cortex, much of which is within reach of Dr. Hirsch's lasers.  So she has been able to do a lot of what the fMRI folks can do, only with much simpler equipment.
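Optical brain monitors of this general type (often called functional near-infrared spectroscopy) typically rely on the modified Beer-Lambert law:  a measured change in light attenuation at a given wavelength is converted into a change in the concentration of a blood-oxygen-related absorber.  A minimal sketch follows, assuming that is the kind of relation Hirsch's system uses; the numbers are purely illustrative.

```python
# Sketch of the modified Beer-Lambert law behind near-infrared brain
# monitoring: a change in light attenuation dA relates to a change in
# absorber concentration dC by
#     dA = epsilon * dC * d * DPF
# where epsilon is the absorber's extinction coefficient, d the
# source-detector separation, and DPF a "differential pathlength
# factor" accounting for photon scattering in tissue. Solving for dC
# gives the concentration change the instrument reports. The values
# below are illustrative only, not calibrated constants.

def delta_concentration(dA: float, epsilon: float, d_cm: float, dpf: float) -> float:
    """Concentration change implied by an attenuation change dA."""
    return dA / (epsilon * d_cm * dpf)

dC = delta_concentration(dA=0.01, epsilon=2.0, d_cm=3.0, dpf=6.0)
print(f"{dC:.6f}")
```

The appeal of the method is visible in the formula:  everything on the right-hand side comes from lasers, photodetectors, and a calibration constant, with no superconducting magnet in sight.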

Don't look for a view-your-own-brain kit to show up on Amazon any time soon, but my point is that this technology is almost bound to get cheaper and better, especially now that President Obama's brain-initiative research funds are attracting more researchers into the field.  So it's worth giving some thought to what the ethical implications of cheap, easily available brain-monitoring technology would be.

Philosophers have been here before anybody else, of course, with their consideration of what is known as the "mind-body problem."  The issue is whether the mind is just a kind of folk term for what the brain really does, or whether the mind is a separate non-material entity that is intimately related to the physical thing we call the brain.  Everybody admits that no two brains are physically identical.  But what does it mean to say that two people are thinking the same thing?  Say you had two bank-robbery suspects in custody and you asked each one where they were on the night of the robbery.  If both of them happened to be robbing the bank that night, the memory of the robbery would have to reside in each of the two brains.  So at some level, the same information would have to be present in each suspect's brain. 

But can technology ever get to the point where you could actually read out memories of things like bank robberies, without the subject's consent? 

It seems like the only safe thing to say at this point is that we don't know.  It's not clear, at this early stage of brain research, that there is enough commonality among brain structures even to hope that memories can be read out in any meaningful way, even if the subject spends hours or days cooperating with researchers and telling them exactly what he or she is thinking while they gather their brain-sensing data.  And crime suspects are not likely to do that.

What we're talking about is a sort of high-tech lie-detector (polygraph) test.  And frankly, lie detectors have not made huge strides in law enforcement, perhaps because they simply don't work that well.  It may be that brain-reading technology today is where music broadcasting was in 1905.  The only way to broadcast music in 1905 was over telephone lines, and while there were some limited successes, the technology was simply too primitive and expensive for music broadcasting to catch on.  It had to wait for the invention of radio (wireless) in the 1920s, which launched the broadcasting industry like a rocket.

Something similar might happen with brain-reading technology if it ever gets cheap and reliable enough.  Dr. Hirsch herself speculates that some day, instead of actually painting a picture with your hands, you'd only have to think the painting, and your brain-reader connected to a laser printer would finish the job.  Any technology that could do that could certainly give a second party some insight into your thoughts, possibly against your will. 

Currently, there are safeguards against the misuse of lie-detector tests.  But if a new technology comes along that is orders of magnitude more informative than the few channels of external data provided by a polygraph, the legal system might be caught with its safeguards down.  The current research regime of institutional review boards seems to do a fairly good job of protecting the rights of research subjects in these matters.  But if law-enforcement organizations with their very different priorities ever get the technical ability to scan brains for personal information, we are going to see a very different ball game, and new rules will be needed.

If you have a chance, go see "Inside Out."  It's funny and ultimately hopeful about the human condition of having emotions that are part of us, yet not under our complete control.  The same is true of our thoughts.  If we ever develop the ability to see another person's thoughts with any degree of accuracy, the amusing fantasy of that movie may become a reality we might not want to have to deal with.

Sources:  The original story on the Yale Brain Function Lab by AP reporter Malcolm Ritter can be read on the Associated Press website, and it appeared in numerous news outlets following its initial publication on June 22.