
Monday, February 26, 2018

Sorting Souls with fMRI


In the March issue of Scientific American, brain-imaging expert John Gabrieli says that we can now use functional magnetic-resonance-imaging (fMRI) technology to predict whether depressed patients will benefit from certain therapies, whether smokers will be able to quit, and whether criminals will land back in jail soon.  But he leaves unanswered some questions he raises—namely, if we find that we can reliably obtain this kind of information, what should we do with it?

First, a brief explanation of what fMRI does.  Using basically the same giant liquid-helium-cooled-magnet MRI technology that hospitals use, fMRI detects changes in blood flow in the brain as certain regions become more active while the patient is thinking about or viewing different things.  For example, my niece is now a psychology postdoc in Omaha, Nebraska, doing research on troubled adolescents: she puts them in an fMRI machine, has them play specially designed video games, and watches what goes on in their brains as they play.  According to Gabrieli, who is at MIT and presumably knows what he's talking about, fMRI studies have been able to discriminate between depressed patients who will benefit from cognitive behavior therapy and those who won't.  He is somewhat short on statistics about exactly how accurate the predictions are, and admits that the technology has a way to go before it's as reliable as, say, a pregnancy test kit.

But just for the sake of argument, suppose tomorrow we had a 95%-accurate technology that was cheap enough to be widely used (neither of which describes fMRI yet), and could tell us ahead of time the likelihood that a convicted criminal would be back in jail in five years.  What could we do with the information?
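Before going on, it's worth pausing on what "95% accurate" would mean in practice, because the usefulness of such a test depends heavily on the base rate of reoffending in the population being screened.  Here is a minimal sketch of the arithmetic; the sensitivity, specificity, and base-rate figures are my assumptions for the sake of argument, not numbers from Gabrieli's article:

```python
# Hypothetical illustration of how a "95%-accurate" recidivism test behaves
# under different base rates. All numbers here are assumptions for the sake
# of argument, not figures from Gabrieli's article.

def predictive_values(sensitivity, specificity, base_rate):
    """Return P(reoffends | flagged) and P(stays clean | cleared)."""
    true_pos = sensitivity * base_rate
    false_pos = (1 - specificity) * (1 - base_rate)
    true_neg = specificity * (1 - base_rate)
    false_neg = (1 - sensitivity) * base_rate
    ppv = true_pos / (true_pos + false_pos)   # flagged and really reoffends
    npv = true_neg / (true_neg + false_neg)   # cleared and really stays clean
    return ppv, npv

for base_rate in (0.5, 0.2, 0.05):
    ppv, npv = predictive_values(0.95, 0.95, base_rate)
    print(f"reoffense base rate {base_rate:.0%}: "
          f"P(reoffends | flagged) = {ppv:.1%}, "
          f"P(stays clean | cleared) = {npv:.1%}")
```

If only 5% of the screened prisoners would actually reoffend, fully half of the ones the test flags would never have committed another crime, a sobering thought when liberty is at stake.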

Given that one purpose of imprisonment is to protect the public, you could argue that those criminals who are very likely to commit more crimes should not be let out on the streets, at least until their fMRI scans improve.  And those whose scans showed very little risk of further crimes might have their sentences curtailed, or even be released right away.

Say you are a member of a parole board, trying to decide which prisoners should be granted parole.  Wouldn't you be glad to have fMRI data on the prisoners that was shown scientifically to be pretty accurate, and wouldn't you feel more confident in your decisions if you based them partly or even mostly on the fMRI predictions?  I think I would.

But what does this look like from the prisoner's point of view?  Suppose you led a life of crime and didn't change your ways until you landed in jail, when you came to yourself and turned over a new leaf.  (It happens.)  You present your sterling behavior record since then to the parole board, but then they make you stick your head in a machine, and the machine says your anterior cingulate cortex is just as unreformed as ever, and the board denies your request for parole.  Wouldn't you feel unfairly treated?  I think I would.

What's going on here is a conflict between two anthropologies, or models of what a human being is.  The psychologists who use fMRI studies to predict behavior emphasize that people are physical structures that work in certain ways.  And they have found strong correlations between certain brain activities and subsequent behavior.  They say, "People with this kind of fMRI profile tend to do that," and they have statistics to back up their claims.  While they admit there are such things as ethical considerations, they spend most of their time thinking of their subjects as elaborate machines, and trying to figure out how the machine works based on what they can see it doing in an fMRI scan.  If you asked Dr. Gabrieli if he believes in free will, he might laugh, or say yes or no, but he would probably regard the question as irrelevant to what he's doing.

The question of free will is crucial to a different model of the human being, the one that claims people have rational souls.  From William James on, the discipline of psychology has tended to dispense with the concept of the soul, but that doesn't change the fact that each of us has one.  I once knew a man who was a former drug user.  Then he became a Christian, settled down, started his own small business, married, and was leading a stable upstanding life the last time I heard of him.  I don't know this for a fact, but I suspect his anterior cingulate cortex would send an fMRI machine off the charts.  Nevertheless, by what psychologists might call strength of will, and by what believers would call the grace of God, he overcame his almost irrepressible desires to do bad things and developed new good habits. 

We once thought it was reasonable to discriminate against people simply because of the color of their skin.  Black people couldn't intermarry with white people, couldn't hold certain jobs, and were (and sometimes still are, regrettably) automatically considered to be the most likely suspects in any criminal investigation.  We now know this kind of discrimination is wrong.

But if fMRI machines, or their cheaper successors, ever attain the accuracy that Dr. Gabrieli hopes for, we will face a choice just as momentous as the one the nation faced when Dr. Martin Luther King Jr. challenged it with his dream in 1963.  Will we decide to sort people into rigid categories based on physical characteristics?  Or will we treat each human being as fully human, each fully deserving the right and opportunity to change and make better decisions regardless of what an imperfect scientific study says?  Those are the kinds of questions we need to face before we inadvertently create a nightmarish regime in which your rights depend on the physical characteristics of your brain, just as much as they depended on the color of your skin in 1950.

Sources:  John Gabrieli's article "A Look Within" appeared on pp. 54-59 of the March 2018 issue of Scientific American.

Monday, February 06, 2017

Is fNIRS the Key to Locked-In Syndrome?


Everybody has something or other that they fear most.  For Indiana Jones, it was snakes.  Surely high on many people's lists of horrors is the fate of falling victim to "locked-in syndrome," which is often the outcome of amyotrophic lateral sclerosis (ALS), a disease that results from the death of motor neurons.  Two famous sufferers were baseball player Lou Gehrig (which is why it is sometimes called "Lou Gehrig's disease") and physicist Stephen Hawking, who has survived for more than five decades after his diagnosis.  Most victims, however, die within three or four years of diagnosis.

A person with locked-in syndrome can usually hear and see normally, but has lost the ability to move any voluntary muscle.  Two-way communication with such people has therefore been impossible, although if even a single eyelid can be moved voluntarily, such a low-data-rate channel can with patience be put to good use.  As a recent article in the MIT Technology Review notes, in 1995 Jean-Dominique Bauby suffered a stroke that left him locked in except for one eyelid, and he used it to dictate his memoirs.  But once the last voluntary muscle nerves die, the door is shut, or at least it was until recently.

The article reports on the work of a Swiss neurological researcher named Niels Birbaumer, who has developed a system to detect voluntary brain activity in locked-in-syndrome sufferers.  The most common means of monitoring the brain is the electroencephalograph (EEG), but EEG signals are notoriously difficult to interpret in terms of actual thought processes.  The highest-resolution way of measuring activity in specific parts of the brain is currently functional magnetic-resonance imaging (fMRI), which can focus on millimeter-size locations anywhere in the brain and monitor subtle changes in blood flow that apparently correlate well with increased neuronal activity.  But fMRI rigs cost many hundreds of thousands of dollars and are expensive to maintain and operate, so they are limited to a few well-funded research sites.

A relatively new technique that Birbaumer has helped develop, functional near-infrared spectroscopy (fNIRS), can non-invasively monitor blood-flow changes in the outer layers of the brain at much lower cost and in a more convenient way than fMRI.  Instead of requiring the patient's head to be inserted into a liquid-helium-cooled machine the size of a small car, fNIRS uses a cap-like device that fits over the patient's head.  The cap holds emitters and sensors of near-infrared light in the wavelength range of 700 to 800 nanometers (visible light spans roughly 400 to 700 nm).  This can be done with inexpensive solid-state components, and the outputs are digitized and analyzed for changes in blood flow.  It turns out that many bodily tissues such as muscle, skin, and bone are partially transparent to near-infrared light, so an fNIRS system can "see" up to 4 cm beneath the surface of the skull, which is far enough to reach the outer layers of the brain.
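How do the digitized light readings become blood-flow numbers?  A standard approach in fNIRS processing is the modified Beer-Lambert law, which converts the dimming of detected light at two wavelengths into changes in oxy- and deoxyhemoglobin concentration.  The sketch below shows the idea; the wavelengths, extinction coefficients, and pathlength factors are rough textbook-style values I have assumed for illustration, not parameters of Birbaumer's system:

```python
# Minimal sketch of fNIRS processing via the modified Beer-Lambert law.
# The constants below are rough illustrative values, not calibrated ones.
import numpy as np

# Extinction coefficients [1/(mM*cm)], rows = two near-infrared wavelengths
# (many systems use roughly 760 nm and 850 nm), columns = (HbO2, HbR).
E = np.array([[0.59, 1.67],    # shorter wavelength: deoxyhemoglobin absorbs more
              [1.06, 0.69]])   # longer wavelength: oxyhemoglobin absorbs more

source_detector_cm = 3.0   # emitter-to-sensor spacing on the cap
dpf = 6.0                  # differential pathlength factor (tissue scattering)

def hemoglobin_changes(intensity_baseline, intensity_now):
    """Turn detected intensities at the two wavelengths into changes in
    oxy- and deoxyhemoglobin concentration (mM)."""
    delta_od = np.log10(np.asarray(intensity_baseline) /
                        np.asarray(intensity_now))   # optical-density change
    return np.linalg.solve(E * source_detector_cm * dpf, delta_od)

# Example: light at both wavelengths dims slightly as local blood volume rises.
d_hbo2, d_hbr = hemoglobin_changes([1.00, 1.00], [0.97, 0.96])
print(f"delta HbO2 = {d_hbo2:+.5f} mM, delta HbR = {d_hbr:+.5f} mM")
```

Two wavelengths are needed because oxy- and deoxyhemoglobin absorb differently on either side of the roughly 800 nm crossover point, which turns the measurement into a two-equation system that can be solved for both concentration changes at once.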

That's far enough for Birbaumer to run a series of tests in which locked-in-syndrome sufferers learned to change their thoughts in a way that would show up on the researcher's fNIRS system.  He then asked them yes-or-no questions to which the patients knew the answers, such as "Were you born in Paris?"  Based on the answers to these test questions, Birbaumer estimates that he can accurately detect the intended answer from a typical patient about 70% of the time.  This is not great, but it's better than chance.  Admittedly, the sample size is small (four patients), but it's a start.
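Whether 70% is meaningfully better than chance depends on how many questions are asked, which is one reason the small sample matters.  A quick back-of-envelope binomial calculation (my sketch, with hypothetical question counts, not the study's own statistics) shows the difference:

```python
# Back-of-envelope check: if a decoder gets k of n yes/no questions right,
# how surprising is that under pure 50/50 guessing? The question counts
# below are hypothetical, not taken from Birbaumer's study.
from math import comb

def p_value_vs_chance(k, n):
    """One-sided binomial tail: P(at least k correct out of n by guessing)."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

print(f"14/20 correct:  p = {p_value_vs_chance(14, 20):.3f}")    # about 0.058
print(f"70/100 correct: p = {p_value_vs_chance(70, 100):.6f}")   # about 0.00004
```

Seventy percent on twenty questions is barely distinguishable from lucky guessing, while seventy percent on a hundred questions almost certainly is not luck.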

What is most interesting about the study is the answers to questions that no one had been able to ask a totally locked-in person before:  "Are you happy?  Do you love to live?"  Three patients who gave fairly reliable answers to the questions with known correct responses said yes, they were happy.  Family members welcomed the news, probably the first communication they had received from their loved ones in many months.

This work is remarkable for several reasons.  First, cracking the lock on locked-in syndrome would be a blessing both for patients, who must be immensely frustrated at not being able to communicate, and for caregivers and loved ones, who both have and do not have the patient with them.  Second, because the equipment needed is relatively simple compared to fMRI, there is reasonable hope that the technology could be commercialized, or at least used more widely than in a few research labs, for routine communications with locked-in-syndrome sufferers.  Fortunately, ALS is a rare disease, occurring in about 2 out of every 100,000 people per year.  But by the same token, its rarity makes it something of an "orphan" disease, meaning that drug companies and research funders often overlook it in favor of more common diseases.  Its cause is unknown except for a few cases that can be attributed to genetic factors, although it seems to be more frequent among professional athletes.

The intersection of medical technology and economics has always been ethically troublesome.  Prior to the modern era, the quality of medical care a person received depended mainly on wealth, although even the best physicians of the 1700s could do very little compared to the average general practitioner of today.  Even in countries with government-funded single-payer healthcare systems, resources are limited, and life-or-death decisions about who gets what treatment are sometimes made by faceless bureaucrats, with dire personal consequences for those who don't make it through the approval process.  Like the poor that we will always have with us, there will always be some sick people who cannot be cured, whether for reasons of economics or of limited medical technology.  But devices such as the "mind-reading" fNIRS system can alleviate the suffering of those whose fate is to remain in this world but who cannot respond voluntarily to any human voice or touch.

There is still a role for charitable organizations in medicine, entities whose primary purpose is not to make money, but to succor the suffering.  Perhaps such an organization will undertake to develop the Birbaumer system into something that can be used more widely by victims of locked-in syndrome, with appropriate precautions against giving false hopes that would be disappointed later.  In the meantime, I hope other fNIRS researchers will follow up this promising lead and pry open the door that has been closed on locked-in people so far.

Sources:  The article "Reached via a mind-reading device, deeply paralyzed patients say they want to live" by Emily Mullin appeared in the MIT Technology Review online at https://www.technologyreview.com/s/603512/reached-via-a-mind-reading-device-deeply-paralyzed-patients-say-they-want-to-live/ on Jan. 31, 2017.  The research article on which the story is based is on the open-access site of PLOS Biology at http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1002593.  I also referred to the Wikipedia articles on functional near-infrared spectroscopy and amyotrophic lateral sclerosis.  I thank my wife for bringing the MIT Technology Review article to my attention.

Monday, November 26, 2012

Right and Wrong: All In Our Brains?


When I was teaching an engineering ethics module for a few semesters, one of the first things I asked the students to do was to spend five minutes writing an answer to this question: “How do you tell the difference between right and wrong conduct?”  The responses usually fell into three categories.

Many students said they rely on what amounts to intuition, a "gut feeling" that a course of action is right or wrong.  Nearly as popular was the response that they look to how other people would act in similar circumstances.  Very rarely, a student would say that he relied on religious guidelines such as the Ten Commandments or the Quran.  These results are consistent with a claim by neuroscientist Josh Greene that many of our moral decisions are guided, if not determined, by the way our brains are wired, but that we can rise above our instinctive moral habits by consciously thinking about our ethical dilemmas and applying reason to them.

An article in Discover magazine outlines the research Greene and others have done with sophisticated brain-imaging techniques such as fMRI (functional magnetic resonance imaging), which indicates spots in the brain that become more active when the mind is engaged in certain activities.  Greene finds that routine ethical choices, such as whether to get up in the morning, are handled by lower-level parts of the brain that we share with other less-developed animals.  But when he poses hard ethical dilemmas, the more sophisticated areas of the brain, the ones that deal with reasoning about probabilities, for example, clearly go to work alongside the parts that control simpler instinctive actions.

One of the ethical dilemmas Greene uses is a form of the "trolley problem," conceived by some philosophers as a test of our ethical reasoning abilities.  As Philippa Foot posed the problem in 1967, you are asked to assume that you are the driver of a tram or trolley that is out of control, and the only choice of action you have is which track to follow at an upcoming switch.  One man is working on the section of track following one branch of the switch, and five men are on the other branch.  Which branch do you choose, given that someone is going to be killed either way?

Greene has found that these and similar hard-choice ethical problems cause the brain to light up in objectively different ways than it does when a simpler question is posed such as, “Is it right to kill an innocent person?”  Whether or not these findings make a difference in how you approach ethical decision-making depends on things that go much deeper than Greene’s experiments with brain analysis.

But first, let me agree with Greene when he says that the world's increasing complexity means we often have to take more thought than we are used to when making ethical decisions.  One reason I favor formal instruction in engineering ethics is that the typical gut-reaction or peer-pressure methods of ethical decision-making that many students bring into an ethics class are not adequate when those students find themselves dealing, after graduation, with complex organizations, multiple parties affected by engineering decisions, and complicated technology that can be used in a huge number of different ways.  Instinct is a poor guide in such situations, which is why I encourage students to learn the basic steps of ethical analysis, so that they are prepared to think about such situations with at least as much brain power as they would use to solve a technical problem.  This is a novel idea to most of them, but it's necessary in today's complex engineering world.

That being said, I believe Greene, and many others who take a materialist view of the human person, are leaving out an essential fact about moral reasoning and the brain.  The reigning assumption made by most neuroscientists is that the self-conscious thing we call the mind is simply a superficial effect of what is really going on in the brain.  Once we figure out how the brain works, they believe, we will also understand how the mind works.  While it is important to study the brain, I am convinced that the mind is a non-material entity which uses the brain, but is not reducible to the brain.  And I also believe we cannot base moral decisions upon pure reason, because reason always has to start somewhere.  And where you start has an immense influence on where you end up.

As a Christian supernaturalist, I maintain that God has put into every rational person’s heart a copy, if you will, of the natural laws of morality.  This is largely, but not exclusively, what Greene and other neuroscientists would refer to as instinctive moral inclinations, and they would trace them back to the brain structures they claim were devised by evolution to cope with the simpler days our ancestors lived in.  (If they really think ancient times were simpler, try living in the jungle by your wits for a week and see how simple it is.)  God has also made man the one rational animal, giving him the ability to reason and think, and God intends us to use our minds to make the best possible ethical decisions in keeping with what we know about God and His revealed truth.  This is a very different approach to ethics from the secular neuroscience view, but I am trying to make vividly clear what the differences are in our respective foundational beliefs.

So both Greene and I think there are moral decisions that can be made instinctively, and those that require higher thought processes.  But what those higher thought processes use, and the assumptions they start from, are very different in the two cases.  I applaud Greene for the insights he and his fellow scientists have obtained about how the mind uses the brain to reach moral decisions.  But I radically disagree with him about what the outcomes of some of those decisions should be, and about the very nature of the mind itself.

Sources:  The Discover magazine online version of the article on Josh Greene's research can be found in the July-August 2011 edition at http://discovermagazine.com/2011/jul-aug/12-vexing-mental-conflict-called-morality.  I also referred to the Wikipedia article on "Trolley problem."