This month's Scientific American carries an article by Joe Z. Tsien about reading the brain's "neural code": the patterns of nerve activity that go on when we remember or think about something. Although most of Dr. Tsien's work has been with mice, he has been able to transform the seemingly random pattern of nerve firings into binary code that tells him what the mouse has been doing and where. Admittedly, the range of mouse activities—nesting, falling in a specially designed mouse elevator, and experiencing a miniature mouse earthquake—falls a little short of human experience. But hey, you have to start somewhere.
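To get a feel for what turning nerve firings into binary code might mean, here is a minimal Python sketch. Everything in it (the five-neuron population, the firing rates, the threshold) is invented for illustration; Dr. Tsien's actual analysis is considerably more elaborate.

```python
# A toy version of "binarizing" neural activity: threshold each
# neuron's firing rate so a population pattern becomes a string of
# 1s and 0s. All numbers here are hypothetical.

# Firing rates (spikes/sec) for five recorded neurons during two events.
rates_during_fall = [42.0, 3.1, 28.5, 1.2, 19.8]
rates_during_quake = [2.4, 35.6, 30.1, 0.8, 4.4]

THRESHOLD = 10.0  # hypothetical cutoff between "active" and "quiet"

def to_binary_code(rates, threshold=THRESHOLD):
    """Map a vector of firing rates to a binary activity pattern."""
    return "".join("1" if r >= threshold else "0" for r in rates)

print(to_binary_code(rates_during_fall))   # -> 10101
print(to_binary_code(rates_during_quake))  # -> 01100
```

Distinct experiences yield distinct codes, and matching a fresh pattern against previously labeled ones is what lets a decoder guess what the mouse has just been through.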
Or do you? Many see farther down this road a threat to the final bastion of independence: the freedom of thought. At the end of his article Dr. Tsien speculates that "in 5,000 years" we might be able to download our minds onto computers, with all the potential for control and exploitation that this entails. He is more conservative than the inventor Ray Kurzweil, who in his book The Singularity Is Near estimates that "the end of the 2030s is a conservative projection for successful [brain] uploading." It is interesting that the same process, transferring a human brain's contents to a machine, is called downloading by Dr. Tsien and uploading by Kurzweil. Perhaps the choice of words unconsciously expresses each man's sense of which of the two objects ranks higher: computers or brains?
Computers and brains are also involved in a recent New Yorker magazine article by Margaret Talbot. The hero (or villain, depending on your point of view) in her piece is Joel Huizenga, founder of a company called No Lie MRI. Really. Huizenga claims that an advanced brain-imaging technique called functional MRI (fMRI for short) is the key to figuring out whether a person is lying. The technique works by tracing the oxygen consumption of various locations in the brain. Since more active parts presumably take up more oxygen, this allows fMRI users to discern different locations of brain activity with a resolution of a few millimeters or less (as long as the patient doesn't turn his head or move his tongue too much during the scan). Huizenga has run some tests in which subjects were asked to lie sometimes and tell the truth other times, and claims his technology is much better than the old polygraph machines that rely on such mundane things as heart rate, breathing rate, and the sweatiness of one's palms. Talbot reports that "neuroethicists" are already up in arms about the threat posed to privacy and freedom by the potential misuse of such technology.
The amusing thing is that neither article mentions that bringing the machinery of science and technology to bear on the human mind and the question of truth is like using an X-ray machine on your checkbook to figure out your bank balance when you've done the math wrong. A bank balance is a non-material entity. Yes, it's recorded in various places: the bank's computer memory chips and disks, your checkbook if you've kept it right, and so on. But without people around to agree on what a bank balance is in the first place and which numbers represent yours in particular, those black marks on paper or magnetized regions on a hard drive somewhere are just random features of the material universe.
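The dependence of meaning on convention is easy to demonstrate. In the Python sketch below (my illustration, not from either article), the very same four bytes on a disk read as two entirely different numbers depending on the convention the reader brings to them:

```python
# The same physical bytes yield different "contents" under different
# reading conventions; absent any convention, they are just marks.
import struct

raw = b"\x00\x00\xc8\x42"  # four bytes sitting on a disk somewhere

as_float = struct.unpack("<f", raw)[0]  # read as a little-endian float
as_int = struct.unpack("<i", raw)[0]    # read as a little-endian integer

print(as_float)  # 100.0      (a plausible bank balance)
print(as_int)    # 1120403456 (a very implausible one)
```

Which number the bytes "really" store is not a physical question at all; it is settled only by the agreement of the people using them.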
Despite materialistic arguments to the contrary, the human mind is a fundamentally different thing from the human brain. In most people's experience, the physical brain is needed for the mind to manifest itself in the material world. But there are respectable philosophical arguments (too lengthy to repeat here) holding that certain features of the mind, namely the validity of reason, show that matter can't be all there is. Truth, if it exists at all (and there are some dangerous types out there who claim it doesn't), must exist in what philosophers call the metaphysical realm, beyond the physical one that is directly sensible.
This is why attempts to develop a technological test for truth, as one would test for diabetes or AIDS, are doomed to fall short of the 100% reliability that would justify widespread use. Even if there is a part of the brain that lying activates in many people, there are so-called pathological liars to whom what we would call a lie appears to be the truth. A delusional person will maintain with the greatest calmness and peace of mind that he is a fried egg, no matter how often you show him his face in the mirror and point out how badly he must have been fried to look like that. Any lie-detector test that relied on subconscious unease or cognitive dissonance would fail to register anything when such a person says he's a fried egg. For all the machine could tell, he IS a fried egg.
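There is also a simpler arithmetic problem with any test short of perfection, which a quick back-of-the-envelope calculation (my own illustration, with hypothetical numbers) makes plain: when actual liars are rare, even a quite accurate test flags mostly innocent people.

```python
# Hypothetical screening scenario: why a lie detector that is "only"
# 95% accurate misfires badly at scale. All numbers are invented.

population = 10_000    # people screened
liar_rate = 0.01       # suppose 1% are actually lying
sensitivity = 0.95     # catches 95% of real lies
specificity = 0.95     # correctly clears 95% of truth-tellers

liars = population * liar_rate               # 100 actual liars
truthful = population - liars                # 9,900 honest subjects
true_hits = liars * sensitivity              # 95 liars caught
false_alarms = truthful * (1 - specificity)  # 495 honest people flagged

flagged = true_hits + false_alarms
print(f"Flagged as lying: {flagged:.0f}")                     # 590
print(f"Flags that are wrong: {false_alarms / flagged:.0%}")  # 84%
```

Roughly five out of six people the machine accuses would be telling the truth, and that assumes an accuracy far beyond anything fMRI has demonstrated.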
Most courts have wisely refrained from admitting lie-detector tests as direct evidence of guilt, although they can be used in a secondary way, on a voluntary basis, to assist in exoneration. While brain research is fascinating and may lead to cures for neurological conditions like Alzheimer's disease, the science-fiction prospect of a kind of "omniscience machine" that you could point at any passerby to read his innermost thoughts or secrets is likely to remain science fiction for centuries, if not forever. For one thing, all such systems require the cooperation of the subject at the outset, especially when the issues being explored are unique to that subject. Both conventional lie detectors and No Lie MRI's system work only to the extent that a subject manifests typical physiological responses to lying. If the information being sought becomes more specific, such as "Where were you on the night of the 19th?", a particular brain's neuronal patterns form an uncrackable codebook-type code, as far as I can tell. The only way to crack it would be to interview the subject beforehand on the matters at issue, with the subject's full cooperation, in order to establish what the code is. From an unwilling subject, that cooperation is hardly likely to be forthcoming.
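The codebook analogy is worth making concrete. In a codebook cipher each meaning is assigned an arbitrary token, so an eavesdropper who sees only tokens has no structure to exploit; every meaning explains the observation equally well. A toy Python sketch (all mappings invented for illustration):

```python
# Toy codebook cipher: each meaning gets an arbitrary random token,
# standing in for the arbitrary neural pattern a particular brain
# happens to use. Everything here is hypothetical.
import secrets

meanings = ["at home", "at the office", "at the scene", "out of town"]

# Each "brain" assigns its own arbitrary 16-bit pattern to each meaning.
codebook = {m: format(secrets.randbits(16), "016b") for m in meanings}

observed = codebook["at the scene"]
print("Observed pattern:", observed)
# Without the codebook itself, nothing about the pattern favors one
# meaning over another; only prior, cooperative interviews could
# establish which pattern stands for what.
```

Run it twice and the tokens change completely, which is the point: the mapping carries all the information, and the mapping lives only in the subject's head.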
So although people interested in engineering ethics ought to keep a watchful eye on brain research, the antics of outfits such as No Lie MRI probably pose more danger to the pocketbooks of investors than to the freedom or privacy of the public at large. That is, unless we convince ourselves that they work even if they don't. And that is a metaphysical problem for another day.
Sources: The July 2007 issue of Scientific American carries Dr. Tsien's article on pp. 52-59. Margaret Talbot's article "Duped" begins on p. 52 of the July 2, 2007 issue of The New Yorker. Ray Kurzweil's prediction of brain uploading by 2040 can be found on p. 200 of The Singularity Is Near (Viking, 2005). For arguments that the mind's reasoning ability points to something beyond materialism, see Victor Reppert, C. S. Lewis's Dangerous Idea (IVP, 2003).