Monday, February 27, 2017
General Motors is trying to do something about self-driving cars, otherwise known as autonomous vehicles. Besides their technical R&D efforts, a recent Associated Press report revealed that GM lobbyists have been busy in numerous U. S. states trying to get a particular bill passed that allegedly would protect the public from self-driving-car hazards. But what it's really supposed to do is to protect GM from having to compete with other self-driving car makers and experimenters.
This is a big deal, because decisions made at the state legislative level today could have profound implications for the development of the infant autonomous vehicle business in the U. S. for decades. I won't quite say the bill would strangle the infant in its crib, but it comes close.
What GM proposes is to allow mainly fleets of automaker-leased autonomous vehicles on the road. These would be fully self-driving machines—no operator standing by—and in targeting centrally-owned fleets, GM has singled out what is the most likely initial application of these vehicles. The first commercially deployed autonomous vehicles are operating in just such a fleet-style mode in a densely-populated section of Singapore, for example. But the kicker is that the law would require that the maker of the cars retain ownership of them, even if they were deployed only for testing purposes.
Other autonomous-vehicle promoters, including major car companies such as Ford, Volvo, and Audi, oppose the bill, saying it's an attempt to slant the playing field in favor of GM. It's also opposed by Uber, Lyft, and other organizations such as Google that don't make cars but are still interested in autonomous vehicles for various reasons.
I have to hand it to GM for their political insight, however. State legislatures tend to be pushovers for corporate-friendly laws, a tradition going back at least to the early twentieth century. For example, it's a fact that Tesla Motors cannot sell its cars directly to Texas consumers. Why not? Because way back when the first car dealerships were being established in the 1910s and 1920s, they banded together in most states and got laws passed that prohibited large, powerful auto companies from selling cars directly to consumers. In this way, a specific group of businesses got a law passed that was nominally for consumer protection but in fact was a special favor to the group. Ties between car dealers and state legislatures have been close ever since, and GM is using this continuing closeness to try to get its law passed.
The AP article cites numerous legislators who have received contributions from GM and also favor GM's legislation. It's an open secret, at least in Texas, that quid-pro-quo legislation in response to contributions (or less legal kinds of influence) goes on all the time, and so it's not surprising that GM has made considerable progress with its attempts to get its legislation passed. But now that we know about it, a consideration of the parties involved can show just how bad an idea it could be.
The groups significantly affected by this matter are: (1) the Big Three U. S. automakers (Ford, Chrysler, and GM), (2) other organizations that don't make cars but are interested in autonomous vehicles for various reasons (Google, Uber, and other experimenters and inventors), (3) state legislatures, who hold the main responsibility for laws regulating driving and drivers, (4) the federal government, which has so far mostly stayed out of the way of this matter, at least with regard to formal legislation, and (5) the car-driving or car-riding public. From the public's point of view, it makes sense to have the widest safe variety of options available for self-driving cars: partially self-driving vehicles owned by the driver/rider, wholly self-driving cars owned and operated by a fleet manager, leased self-driving cars, retrofitted self-driving cars, and whatever else can safely be done in this area to see which usage models work economically and which don't.
While the focus up to now has been mostly on the technology, astute observers have pointed out the possibility that self-driving cars could revolutionize the whole economic makeup of the auto industry. Car ownership in the future might look as quaint or peculiar as ownership of a private electric-power plant looks today, but that's the way many privately-owned homes of the rich were provided with electricity in the very early days of electric lighting. Some forecasters see visions of autonomous vehicles, like mini-buses or cabs, showing up on command at your doorstep, with everybody living in apartments without garages or parking lots. Nobody has satisfactorily explained to me where all these garageless vehicles will go at night when demand is down, but I suppose they must go someplace. At a minimum, they could park themselves in dense ranks instead of in regular lots, which have to allow any-time access for any vehicle and consequently are at least 70% or so open space.
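For what it's worth, that 70% figure is easy to sanity-check with rough arithmetic. The stall and aisle dimensions in this little calculation are typical U. S. values I've assumed for illustration, not numbers from any source:

```python
# Rough sanity check: what fraction of a conventional parking lot is open space?
# All dimensions (in feet) are assumed typical values, for illustration only.
car_footprint = 7.0 * 15.0          # area actually covered by a parked car
stall = 9.0 * 18.0                  # painted parking stall
aisle_per_stall = 9.0 * 12.0        # half of a 24-ft two-way aisle, per stall
area_per_car = stall + aisle_per_stall   # roughly 270 sq ft allocated per vehicle
open_fraction = 1 - car_footprint / area_per_car
print(f"open space: {open_fraction:.0%}")
```

Even before counting entrance lanes, landscaping, and setbacks, something like 60% of the lot is empty pavement at full occupancy, so 70% or so for a whole lot is quite plausible.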
Whatever the details, GM has realized that autonomous vehicles are a potential threat to its current business model. Hence its rather clumsy attempt to get ahead of the curve with legislation that would favor automakers to the exclusion of nearly everyone else. Needless to say, if GM's proposals turn into law in every state, they will severely hamper all the other models of large-scale deployment of autonomous vehicles, and we might just be stuck with them, as we're stuck with locally-owned car dealerships to this day, whether it makes sense to sell cars that way or not.
I don't know if this matter merits a citizens' letter-writing campaign, but at the very least we should be aware that GM is trying to throw up a big legislative roadblock in the path of self-driving cars. Let's hope that state legislators all across the country do the statesman-like thing and resist the temptation to give in to one special interest at the price of inconvenience, or worse, for the general public.
Sources: The Associated Press article by Joan Lowy to which I referred was carried by numerous outlets including ABC News, where it was titled "INFLUENCE GAME: GM bill is self-driving and self-interested." It was posted on Feb. 23, 2017 at http://abcnews.go.com/Technology/wireStory/gm-pushing-driving-car-rules-undercut-competitors-45680319. I also referred to a Washington Post article on auto dealership laws at https://www.washingtonpost.com/news/wonk/wp/2013/05/14/auto-dealers-and-state-legislatures-conspire-to-make-cars-more-expensive-can-tesla-change-that/.
Monday, February 20, 2017
Can the smart devices in your home be used as evidence against you? That's the question raised by a murder case out of Bentonville, Arkansas. On Saturday, Nov. 21, 2015, James Bates invited three friends over to watch an Arkansas Razorbacks game. The men had some drinks, and when one of them, Owen McDonald, left around 12:30 AM, Bates was still up and around.
The next morning, Bates called 911 to report that he'd found one of his guests, Victor Collins, floating face-down, dead in Bates's hot tub. He claimed he'd gone to bed and left Collins still awake.
After checking with McDonald and looking at the physical evidence, police began to doubt Bates's story. The deck around the hot tub was wet despite near-freezing temperatures, and Collins' body had sustained numerous injuries consistent with his being strangled to unconsciousness and then put in the hot tub to drown. Bates's home was equipped with both an Amazon Echo and a smart water meter. As part of the investigation, police attempted to obtain evidence from the operators of both devices.
The Bentonville utility department was very helpful. The smart meter records hour-by-hour water usage. Up to midnight, the most water used in any hour at Bates's home that fateful evening was ten gallons. But between 1 AM and 3 AM, somebody used 140 gallons, the most ever recorded by that meter in so short a time. This was consistent with Bates's having used a water hose to rinse away blood, traces of which police found near the hot tub anyway.
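The utility's contribution amounted to simple anomaly detection: flag any hour whose usage dwarfs the home's historical pattern. A minimal sketch of the idea (the hourly readings below are invented, except for the ten-gallon historical maximum and the 140 gallons over two hours, which come from the report):

```python
# Flag hourly meter readings far above a home's historical maximum usage.
def flag_anomalies(hourly_gallons, historical_max, factor=3.0):
    """Return (hour_index, gallons) pairs exceeding factor * historical_max."""
    return [(h, g) for h, g in enumerate(hourly_gallons)
            if g > factor * historical_max]

# Invented evening readings; the last two hours mirror the reported
# 140 gallons used between 1 AM and 3 AM.
readings = [4, 2, 10, 3, 70, 70]
print(flag_anomalies(readings, historical_max=10))   # flags the two spike hours
```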
The way the Echo works is similar to other digital assistants such as Apple's Siri. It passively listens, holding audio in a local buffer memory, until it "hears" the wake-up word—in the case of Echo, it's "Alexa." Then it sends the preceding and following minute or so of audio to the Amazon cloud, where sophisticated voice-recognition software decodes the request and does whatever the inquirer has asked, within the limits of the software, of course.
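That listen-locally, upload-on-wake-word behavior can be sketched in a few lines of code. This is only my illustration of the general scheme described above; the class, buffer sizes, and frame handling are invented, not Amazon's actual implementation:

```python
from collections import deque

# Sketch of wake-word buffering: recent audio is held only in a local ring
# buffer; when the wake word is heard, the device sends the buffered
# (preceding) audio plus the frames that follow to the cloud.
class WakeWordDevice:
    def __init__(self, buffer_frames=50, follow_frames=50):
        self.buffer = deque(maxlen=buffer_frames)  # rolling local memory
        self.uploaded = []                          # what reaches "the cloud"
        self.follow_frames = follow_frames
        self.awake = 0                              # frames left to upload

    def hear(self, frame):
        self.buffer.append(frame)
        if self.awake:                              # still uploading after wake
            self.uploaded.append(frame)
            self.awake -= 1
        elif frame == "ALEXA":                      # wake word detected
            self.uploaded.extend(self.buffer)       # ship the preceding audio
            self.awake = self.follow_frames         # plus what follows

d = WakeWordDevice(buffer_frames=3)
for f in ["a", "b", "c", "d", "ALEXA", "e", "f"]:
    d.hear(f)
print(d.uploaded)   # only the audio around the wake word leaves the device
```

The point the sketch makes is the same one Amazon makes: frames that scroll out of the ring buffer before any wake word ("a" and "b" above) are gone for good and never reach the cloud.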
It was a long shot to begin with to hope that the Echo had recorded anything of relevance to the cloud. The investigators were hoping that, on the off chance the Echo "woke up" sometime during the murder, they could obtain useful information. But they ran into a wall with Amazon, which claimed that their request was outside the bounds of what the company considers reasonable. All that Amazon would tell them was Bates's purchase information regarding the Echo itself, which had been physically taken into custody by law enforcement.
On the strength of the smart-meter evidence, Bates has been charged with the murder of Collins, but is currently out on bail awaiting trial. In the meantime, numerous privacy and electronically-stored-evidence legal experts have commented on the case, as it is one of the first to involve the relatively new digital assistants, and also one in which the company operating the device has refused to cooperate to the extent requested by law enforcement.
As David Pogue points out in a commentary in Scientific American, one can understand Amazon's reluctance to give a public impression that every Echo is a potential stool pigeon. The product has been very popular up to now, and Amazon doesn't want to do anything to dampen the law-abiding public's enthusiasm.
But it's interesting to me to note the contrast between the local utility company's attitude toward the request for information and Amazon's. The fact of the matter is that, once again, technology has outpaced the legal system's ability to keep up with it.
Generally speaking, the laws of evidence allow law enforcement personnel to request all sorts of information once they have obtained a valid search warrant. Telephone and text message records, Fitbit data, records of Internet searches, and even video game data have all been used as evidence in criminal cases, according to Holly Howell, a writer at the legal website Cumberland Trial Journal.
But the difference here between the smart-meter data and the Echo data is that smart meters aren't consumer products that buyers choose among competing offerings, while personal assistants like the Echo are. One can detect a whiff of hypocrisy in the stance of Amazon, whose profitability increasingly relies on the rich mines of data it extracts from the digital behavior of its customers, and which sells such data or otherwise profits from it. Admittedly, anyone who has spent any amount of time on the Internet knows that unless you take extreme precautions against information mining, the websites you visit are going to share information about your searches with whoever they think might be interested, as long as the interested parties pay for it. So when you're online, you know you can easily be observed, just as in the old days of three-network TV you knew that no matter how good the show was, it would sooner or later be interrupted by a commercial. It's just something we've learned to put up with.
But doing a deliberate Internet search is one thing, and minding your own business and going about your private life is another. I suppose if I became a bed-bound invalid I would find some use for an Echo or a Siri, but other than that I have no plans to get one. And if I did, I would undoubtedly be creeped out if I thought the thing was listening to everything I said and relaying it to some anonymous cloud server that would do Heaven knows what with the information.
So I can see why Amazon is reluctant even to give the impression that the Echo is really listening to everything you do in any meaningful sense. But as the Internet of Things keeps advancing, both criminals and law-enforcement personnel will increasingly find uses for such devices—the one group in committing ingenious crimes, and the other in solving them. And currently, the laws concerning what counts as valid evidence have to be twisted out of their traditional context to apply to at least some of these novel technologies. Asking for more laws these days is not a popular thing to do, but it does seem that some legislation is needed to clarify the status and obligations of firms such as Amazon when products they sell and operate inadvertently obtain information that can be used in judicial proceedings.
The discovery process of Mr. Bates's trial begins next month, and so we'll have to wait till then to see if the police ever found anything useful on his Echo. In the meantime, if you have a personal digital assistant, watch what you say around it. It might rat on you.
Sources: I came across the Collins murder case in David Pogue's column "Your Echo is Listening," in Scientific American (March 2017), p. 28. I also referred to Holly Howell's article "Is Evidence Gathered from 'Smart' Devices the New Way to Catch Dumb Criminals?" in the Cumberland Trial Journal at http://www.cumberlandtrialjournal.com/is-evidence-gathered-from-smart-devices-the-new-way-to-catch-dumb-criminals/.
Monday, February 13, 2017
The word "chimera" originally referred to a creature in Greek mythology. It had a lion's head and body, a goat's head growing out of its back, a serpent's tail, and it breathed fire. Also, it was female. Seeing a chimera was generally regarded as a bad omen, leading to earthquakes or famines.
The other use of the word, as in "the chimera of peace in the Middle East," is to mean something that will probably never happen. Recent experiments at the Salk Institute and elsewhere show that while mice may be able to grow organs for transplantation into sick rats, the hope that pigs may be able to grow human organs for transplants has receded into the future, and may never be realized. What I'd like to know is, should we even be doing this stuff at all?
First, the background.
Human organ transplants from either cadavers or live donors are plagued by the problem of rejection. The body recognizes foreign tissue and mounts an attack on it, leading to complications in organ transplants such as graft-versus-host disease, which can ultimately lead to the failure of the transplanted organ and other chronic and acute problems. So medical researchers would like to develop replacement organs from the patient's own body using the patient's own adult stem cells, which can be potentially made to become a wide variety of organs. The problem is that for a desired organ such as a pancreas to develop from stem cells, it needs to be in an embryo, or an embryo-like environment that is similar enough to a human embryo to encourage the proper development and growth.
Pigs turn out to be among the animals physiologically closest to humans, in terms of weight, organ size, and other factors. So Juan Carlos Izpisúa Belmonte and Jun Wu of San Diego's Salk Institute inserted adult human stem cells into 2,000 pig embryos and implanted the embryos in a number of female pigs. The yield wasn't very good—only 186 embryos lived as long as a month, at which time the pigs were "sacrificed." Many of the modified pig embryos were smaller than normal, and the human stem cells that survived were mostly scattered around in the embryos rather than forming specific human organs. Wu views this setback as temporary, calling it "a technical problem that can be tackled in a targeted and rational way."
At a recent workshop sponsored by the U. S. National Institutes of Health (NIH), that federal agency reviewed the ethics of chimera research and said it would reconsider its current ban on federal funding of such work. But now that a new administration has taken charge, it is not clear whether the ban will be lifted. The Salk Institute researchers got around the ban by using private funds for their research.
It's easy to think of arguments against chimera experiments involving human cells. The NIH people seem especially worried about the brain of a pig getting human brain cells, or the germ line (eggs and sperm) of a pig receiving human DNA. The thought that a candidate for transformation into pork chops has a family tree that includes your Uncle Jack is indeed a disquieting notion. And what about a human brain growing in a pig's body? Would that make the pig-chimera human? Obviously, to answer this question requires that one have a robust definition of what it means to be human. And it's not clear to me that the NIH has such a definition, at least not one that can be coherently defended.
If possession of a human brain is all you need to be human, U. S. law currently allows humans to be aborted up to nearly the time of birth, under some circumstances. While the question of human-chimera research does not at first seem relevant to the issue of abortion, both issues involve treating living beings as instruments of someone's will.
Bioethicist Leon Kass used the phrase "the wisdom of repugnance" to describe certain reactions that people have which cannot necessarily be articulated into finely honed arguments, but which nevertheless deserve attention. A more pungent term for the same idea is "yuck factor." If the idea of growing a human brain inside a pig's body fills us with revulsion, maybe we should pay attention to the revulsion even if we can't say exactly why we are revolted.
Proponents of animal rights are probably not rejoicing over the prospect of pigs that grow human organs either. Their reasons are different in some ways, but go back to the question of whether living beings—human or animal—should be used as instruments for another's will. There is near-universal agreement that one human should not use another as an instrument, a sentiment that goes back at least to Kant. But there is disagreement about whether humans can use other animals as instruments, a practice that also has a long-standing tradition in favor of it.
For now, the debate about human chimeras is still largely academic, as it appears that pigs are not yet good candidates for this sort of thing. Maybe monkeys will be tried next, but that raises the same sorts of issues as the experiments with pigs.
God gave us human beings minds that are capable of devising plans for great good and also for great evil. Most religious traditions hold that people also have a moral sense that gives rise to such things as the yuck factor, and that we ignore this sense at our peril. It's probably a good thing that the NIH has refrained from supporting human chimera research, but obviously that hasn't stopped its progress. If someone told me I had a fatal disease that could be cured with a transplant from a specially grown pig, or monkey, I don't know what I would decide. But I'm not sure we should even contemplate asking people to make such a decision, especially if, in the process, we risk creating monstrosities who might be human and might not be. And only the chimera would know for sure.
Monday, February 06, 2017
Everybody has something or other they fear above all else. For Indiana Jones, it was snakes. Surely high on many people's lists of horrors is falling victim to "locked-in syndrome," which is often the outcome of amyotrophic lateral sclerosis (ALS), a disease that results from the death of motor neurons. Two famous sufferers from the disease were baseball player Lou Gehrig (which is why it's sometimes called "Lou Gehrig's disease") and physicist Stephen Hawking, who has survived for more than five decades after his diagnosis. Most victims, however, die within three or four years of diagnosis.
A person with locked-in syndrome can usually hear and see normally, but has lost the ability to move any voluntary muscle. Two-way communication with such people has therefore been impossible up to now, although if even a single eyelid can be moved voluntarily, such a low-data-rate channel can, with patience, be put to good use. As a recent article in the MIT Technology Review notes, in 1995 Jean-Dominique Bauby suffered a stroke that left him locked in except for one eyelid, and he used it to dictate his memoirs. But once the last voluntary motor nerves die, the door is shut, or at least it was until recently.
The article reports on the work of a Swiss neurological researcher named Niels Birbaumer, who has developed a system to detect voluntary brain activity in locked-in-syndrome sufferers. The most common means of monitoring the brain is the electroencephalograph (EEG), but EEG signals are notoriously difficult to interpret in terms of actual thought processes. The highest-resolution way of measuring activity in specific parts of the brain is currently functional magnetic-resonance imaging (fMRI), which can focus on millimeter-size locations anywhere in the brain and monitor subtle changes in blood flow that apparently correlate well with increased neuronal activity. But fMRI rigs cost many hundreds of thousands of dollars, are expensive to maintain and operate, and so are limited to a few well-funded research sites.
A relatively new technique Birbaumer has helped develop is called functional near-infrared spectroscopy (fNIRS), and can non-invasively view blood-flow changes in the outer layers of the brain at much less cost and in a more convenient way than fMRI. Instead of the need to insert the patient's head in a liquid-helium-filled machine the size of a small car, fNIRS uses a cap-like device that fits over the patient's head. The cap holds emitters and sensors of near-infrared light in the wavelength range of 700 to 800 nanometers (visible light is in the range of about 400 to 700 nm). This can be done with inexpensive solid-state components, and the outputs are digitized and analyzed for changes in blood flow. It turns out that many types of bodily substances such as muscle, skin, and bone are partially transparent to near-infrared light, and so an fNIRS system can "see" up to 4 cm beneath the surface of the skull, which is far enough to reach the outer layers of the brain.
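For the curious: fNIRS systems typically turn those raw light measurements into blood-flow estimates using the modified Beer-Lambert law, which relates the change in optical density at each wavelength to changes in oxy- and deoxyhemoglobin concentration; measuring at two wavelengths gives two equations in two unknowns. Here's a sketch of that inversion; the extinction coefficients and path-length numbers are placeholders for illustration, not calibrated constants:

```python
# Modified Beer-Lambert law for two-wavelength fNIRS:
#   dOD(w) = (eps_HbO(w) * dHbO + eps_HbR(w) * dHbR) * d * DPF
# Two wavelengths give a 2x2 linear system in (dHbO, dHbR), solved here
# by hand. All numeric values are illustrative placeholders.
def hb_changes(dOD1, dOD2, eps, d=3.0, dpf=6.0):
    """eps = ((eps_HbO_w1, eps_HbR_w1), (eps_HbO_w2, eps_HbR_w2))."""
    (a, b), (c, e) = eps
    L = d * dpf                        # effective optical path length (cm)
    det = a * e - b * c                # determinant of the 2x2 system
    dHbO = (e * dOD1 - b * dOD2) / (det * L)
    dHbR = (a * dOD2 - c * dOD1) / (det * L)
    return dHbO, dHbR

# Placeholder extinction coefficients at, say, ~760 nm and ~850 nm.
eps = ((1.0, 2.0), (2.5, 1.5))
print(hb_changes(0.02, 0.03, eps))
```

The point is just that the math downstream of the sensors is ordinary linear algebra; the hard parts are the optics and the calibration.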
That was far enough for Birbaumer to run a series of tests in which locked-in-syndrome sufferers learned to change their thoughts in a way that would show up on the researchers' fNIRS system. He then asked them yes-or-no questions to which the patient knew the answer, such as "Were you born in Paris?" Based on the answers to these test questions, Birbaumer estimates that he can accurately detect the intended answer from a typical patient about 70% of the time. This is not great, but it's better than chance. Admittedly, the sample size is small (four patients), but it's a start.
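Whether 70% really beats chance depends on how many questions were asked, which the article doesn't specify per patient. A quick binomial tail calculation shows that, assuming something like 20 known-answer questions (my assumption, purely for illustration), 70% correct would be unlikely by guessing alone:

```python
from math import comb

# Probability of getting at least k of n yes/no answers right by pure
# guessing, with each guess correct with probability 1/2.
def p_at_least(k, n):
    return sum(comb(n, j) for j in range(k, n + 1)) / 2**n

# If a patient answered 14 of 20 known-answer questions correctly (70%):
p = p_at_least(14, 20)
print(f"chance of doing that well by guessing: {p:.3f}")
```

That works out to about a 6% chance, so the result is suggestive, though, as the small sample size warns us, hardly overwhelming.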
What is most interesting about the study was the answers to questions that no one has been able to ask a totally locked-in person before: "Are you happy? Do you love to live?" Three patients who gave fairly reliable answers to the questions with known correct responses said yes, they were happy. Family members welcomed the news, probably the first communication they had received from their loved ones in many months.
This work is remarkable for several reasons. First, cracking the lock on locked-in syndrome would be a blessing both for patients, who must be immensely frustrated at not being able to communicate, and for caregivers and loved ones, who both have and do not have the patient with them. Second, because of the relatively simple equipment needed compared to fMRI, there is reasonable hope that the technology could be commercialized, or at least used more widely than in a few research labs, for routine communication with locked-in-syndrome sufferers. Fortunately, ALS is a rare disease, striking only about 2 people per 100,000 per year. But by the same token, its rarity makes it somewhat of an "orphan" disease, meaning that drug companies and research funders often overlook it in favor of more common diseases. Its cause is unknown except for a few cases that can be attributed to genetic factors, although it seems to be more frequent among professional athletes.
The intersection of medical technology and economics has always been troublesome ethically. Prior to the modern era, the quality of medical care received depended mainly on wealth, although even the best physicians of the 1700s could do very little compared to the average general practitioner of today. Even in countries with government-funded single-payer healthcare systems, resources are limited, and life-or-death decisions about who gets what treatment are sometimes made by faceless bureaucrats, with sometimes dire personal consequences for those who don't make it through the approval process for treatment. Like the poor that we will always have with us, there will always be some sick people who cannot be cured, whether for reasons of economics or limited medical technology. But devices such as the "mind-reading" fNIRS system can alleviate the suffering of those whose fate is to be still in this world, but who cannot respond voluntarily to any human voice or touch.
There is still a role for charitable organizations in medicine, entities whose primary purpose is not to make money, but to succor the suffering. Perhaps such an organization will undertake to develop the Birbaumer system into something that can be used more widely by victims of locked-in syndrome, with appropriate precautions against giving false hopes that would be disappointed later. In the meantime, I hope other fNIRS researchers will follow up this promising lead and pry open the door that has been closed on locked-in people so far.
Sources: The article "Reached via a mind-reading device, deeply paralyzed patients say they want to live," by Emily Mullin appeared in the MIT Technology Review online at https://www.technologyreview.com/s/603512/reached-via-a-mind-reading-device-deeply-paralyzed-patients-say-they-want-to-live/ on Jan. 31, 2017. The research article on which the story is based is on the open-access site of PLOS Biology at http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002593. I also referred to the Wikipedia articles on functional near-infrared spectroscopy and amyotrophic lateral sclerosis. I thank my wife for bringing the MIT Technology Review article to my attention.