
Monday, February 26, 2018

Sorting Souls with fMRI


In the March issue of Scientific American, brain-imaging expert John Gabrieli says that we can now use functional magnetic-resonance-imaging (fMRI) technology to predict whether depressed patients will benefit from certain therapies, whether smokers will be able to quit, and whether criminals will land back in jail soon.  But he leaves unanswered some questions he raises—namely, if we find that we can reliably obtain this kind of information, what should we do with it?

First, a brief explanation of what fMRI does.  Using basically the same giant-liquid-helium-cooled-magnet MRI technology that hospitals use, fMRI detects changes in blood flow in the brain as certain regions become more active while the patient is thinking about or viewing different things.  For example, my niece is now a psychology postdoc in Omaha, Nebraska, doing research on troubled adolescents by putting them in an fMRI machine, having them play specially designed video games, and watching what goes on in their brains as they play.  According to Gabrieli, who is at MIT and presumably knows what he's talking about, fMRI studies have been able to discriminate between depressed patients who will benefit from cognitive behavioral therapy and those who won't.  He is somewhat short on statistics about exactly how accurate the predictions are, and admits that the technology has a way to go before it's as reliable as, say, a pregnancy test kit. 

But just for the sake of argument, suppose tomorrow we had a 95%-accurate technology that was cheap enough to be widely used (neither of which describes fMRI yet), and could tell us ahead of time the likelihood that a convicted criminal would be back in jail in five years.  What could we do with the information?
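Even a 95%-accurate prediction deserves careful interpretation, because what a positive result means depends on how common reoffending actually is.  Here is a minimal back-of-the-envelope Bayes calculation in Python; the 40% base rate is a purely hypothetical assumption for illustration, not a figure from Gabrieli's article:

```python
# Hypothetical numbers: a test that is 95% accurate in both directions,
# applied where 40% of released prisoners actually reoffend in five years.
sensitivity = 0.95   # P(test flags reoffense | prisoner will reoffend)
specificity = 0.95   # P(test clears | prisoner will not reoffend)
base_rate = 0.40     # assumed fraction who actually reoffend

true_pos = sensitivity * base_rate            # correctly flagged
false_pos = (1 - specificity) * (1 - base_rate)  # wrongly flagged

# Chance that a prisoner the test flags really will reoffend
ppv = true_pos / (true_pos + false_pos)
print(f"P(reoffends | flagged) = {ppv:.2f}")  # about 0.93 here
```

Even under these generous assumptions, roughly one flagged prisoner in fourteen would be denied release wrongly; with a lower base rate, the fraction of false alarms grows.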

Given that one purpose of imprisonment is to protect the public, you could argue that those criminals who are very likely to commit more crimes should not be let out on the streets, at least until their fMRI scans improve.  And those whose fMRI scans showed that they were at very little risk of committing more crimes might have their sentences curtailed, or maybe we should just release them right away. 

Say you are a member of a parole board, trying to decide which prisoners should be granted parole.  Wouldn't you be glad to have fMRI data on the prisoners that was shown scientifically to be pretty accurate, and wouldn't you feel more confident in your decisions if you based them partly or even mostly on the fMRI predictions?  I think I would.

But what does this look like from the prisoner's point of view?  Suppose you led a life of crime and didn't change your ways until you landed in jail, when you came to yourself and turned over a new leaf.  (It happens.)  You present your sterling behavior record since then to the parole board, but then they make you stick your head in a machine, and the machine says your anterior cingulate cortex is just as unreformed as ever, and the board denies your request for parole.  Wouldn't you feel unfairly treated?  I think I would.

What's going on here is a conflict between two anthropologies, or models of what a human being is.  The psychologists who use fMRI studies to predict behavior emphasize that people are physical structures that work in certain ways.  And they have found strong correlations between certain brain activities and subsequent behavior.  They say, "People with this kind of fMRI profile tend to do that," and they have statistics to back up their claims.  While they admit there are such things as ethical considerations, they spend most of their time thinking of their subjects as elaborate machines, and trying to figure out how the machine works based on what they can see it doing in an fMRI scan.  If you asked Dr. Gabrieli if he believes in free will, he might laugh, or say yes or no, but he would probably regard the question as irrelevant to what he's doing.

The question of free will is crucial to a different model of the human being, the one that claims people have rational souls.  From William James on, the discipline of psychology has tended to dispense with the concept of the soul, but that doesn't change the fact that each of us has one.  I once knew a man who was a former drug user.  Then he became a Christian, settled down, started his own small business, married, and was leading a stable upstanding life the last time I heard of him.  I don't know this for a fact, but I suspect his anterior cingulate cortex would send an fMRI machine off the charts.  Nevertheless, by what psychologists might call strength of will, and by what believers would call the grace of God, he overcame his almost irrepressible desires to do bad things and developed new good habits. 

We once thought it was reasonable to discriminate against people simply because of the color of their skin.  Black people couldn't intermarry with white people, couldn't hold certain jobs, and were (and sometimes still are, regrettably) automatically considered to be the most likely suspects in any criminal investigation.  We now know this kind of discrimination is wrong.

But if fMRI machines, or their cheaper successors, ever attain the accuracy that Dr. Gabrieli hopes for, we will face a choice just as momentous as the one the nation faced when Dr. Martin Luther King challenged it with his dream in 1963.  Will we decide to sort people into rigid categories based on physical characteristics?  Or will we treat each human being as fully human, each fully deserving the right and opportunity to change and make better decisions regardless of what an imperfect scientific study says?  Those are the kinds of questions that we need to face before we inadvertently create a nightmarish regime in which your rights depend on the physical characteristics of your brain, just as much as they depended on the color of your skin in 1950.

Sources:  John Gabrieli's article "A Look Within" appeared on pp. 54-59 of the March 2018 issue of Scientific American.

Monday, July 06, 2015

Inside Out For Real: Brain Mapping and Privacy


Recently my wife and I went to see "Inside Out," the Pixar animated comedy about a girl named Riley and what her five personified emotions—Joy, Anger, Disgust, Fear, and Sadness—do in her brain when she's uprooted as her family moves from Minnesota to San Francisco.  It sounds like an unlikely premise for any kind of a movie, but Pixar pulled it off, zooming into the minds of Riley, her mother, her father, her teacher, and even a few pets for good measure. 

The idea of getting inside somebody's brain to see what's really going on makes for a good fantasy, but what if we could do it now?  And not just in laboratory settings with millions of dollars' worth of equipment, but with a machine costing only a few thousand bucks, within the budget of, say, your average police department?  If you think about it, it's not so funny anymore.

Mind-reading technology is not just around the corner, to be sure.  But what gets me thinking along these lines, besides seeing "Inside Out," is an article about some new brain-scanning technology being used by Joy Hirsch and her colleagues at the Yale Brain Function Lab. 

The biggest advance in monitoring what's going on in a living brain in recent years has been fMRI, short for functional magnetic-resonance imaging.  This technology uses an advanced form of the familiar diagnostic-type MRI machine to keep track of blood flow in different parts of the brain.  Associating more brain activity with more blood-oxygen use, fMRI technology shows different brain areas "lighting up" as various mental tasks are performed. 

While great strides in correlating mental activities with specific parts of the brain have been achieved with fMRI, the machinery is expensive, bulky, and temperamental, involving liquid-helium-cooled magnets and cutting-edge signal processing systems that confine it to a few well-equipped labs around the world.  But now Joy Hirsch has come along with a completely different technology involving nothing more complex than laser beams and a fiber-optic piece of headgear that fits on your (intact) skull like a high-tech skullcap.  From the photo accompanying the article, it looks like you don't even have to shave your head for the laser beams to go through the skull and into the top few millimeters of the brain.  While that misses some important parts, a lot goes on in the upper layers of the cerebral cortex, much of which is within reach of Dr. Hirsch's lasers.  So she has been able to do a lot of what the fMRI folks can do, only with much simpler equipment.

Don't look for a view-your-own-brain kit to show up on Amazon any time soon, but my point is that this technology is almost bound to get cheaper and better, especially now that President Obama's brain-initiative research funds are attracting more researchers into the field.  So it's worth giving some thought to what the ethical implications of cheap, easily available brain-monitoring technology would be.

Philosophers have been here before anybody else, of course, with their consideration of what is known as the "mind-body problem."  The issue is whether the mind is just a kind of folk term for what the brain really does, or whether the mind is a separate non-material entity that is intimately related to the physical thing we call the brain.  Everybody admits that no two brains are physically identical.  But what does it mean to say that two people are thinking the same thing?  Say you had two bank-robbery suspects in custody and you asked each one where they were on the night of the robbery.  If both of them happened to be robbing the bank that night, the memory of the robbery would have to reside in each of the two brains.  So at some level, the same information would have to be present in each suspect's brain. 

But can technology ever get to the point where you could actually read out memories of things like bank robberies, without the subject's consent? 

It seems like the only safe thing to say at this point is that we don't know.  It's not clear, at this early stage of brain research, that there is enough commonality among brain structures even to hope that memories can be read out in any meaningful way, even if the subject spends hours or days cooperating with researchers and telling them exactly what he or she is thinking while they gather their brain-sensing data.  And crime suspects are not likely to do that.

What we're talking about is a sort of high-tech lie detector (polygraph) test.  And frankly, lie detectors have not made huge strides in law enforcement, maybe because they simply don't work that well.  That may be because we are at the point in brain-reading technology where music broadcasting was in 1905.  The only way you could broadcast music in 1905 was over telephone lines, and while there were some limited successes in this area, the technology was simply too primitive and expensive for music broadcasting to catch on.  It had to wait for the rise of radio (wireless) broadcasting in the 1920s, which launched the industry like a rocket.

Something similar might happen with brain-reading technology if it ever gets cheap and reliable enough.  Dr. Hirsch herself speculates that some day, instead of actually painting a picture with your hands, you'd only have to think the painting, and your brain-reader connected to a laser printer would finish the job.  Any technology that could do that could certainly give a second party some insight into your thoughts, possibly against your will. 

Currently, there are safeguards against the misuse of lie-detector tests.  But if a new technology comes along that is orders of magnitude more informative than the few channels of external data provided by a polygraph, the legal system might be caught with its safeguards down.  The current research regime of institutional review boards seems to do a fairly good job of protecting the rights of research subjects in these matters.  But if law-enforcement organizations with their very different priorities ever get the technical ability to scan brains for personal information, we are going to see a very different ball game, and new rules will be needed.

If you have a chance, go see "Inside Out."  It's funny and ultimately hopeful about the human condition of having emotions that are part of us, yet not under our complete control.  The same is true of our thoughts.  If we ever develop the ability to see another person's thoughts with any degree of accuracy, the amusing fantasy of that movie may become a reality we might not want to have to deal with.

Sources:  The original story on the Yale Brain Function Lab by AP reporter Malcolm Ritter can be read on the Associated Press website at http://bigstory.ap.org/article/30381f2e2d8d4f31a534e0ea1c5d067c/lasers-magnetism-allow-glimpses-human-brain-work, and appeared in numerous news outlets following its initial publication on June 22. 

Monday, March 04, 2013

Obama’s Brain Project: A Hall of Mirrors?


One of the famous line drawings of the artist M. C. Escher portrays a realistically drawn hand holding a pencil.  The line drawn by the pencil turns out to be the cuff of a shirt sleeve, from which emerges a second hand. . . which grows out of the paper somehow and holds a pencil, whose line is the cuff of a shirt sleeve, from which emerges the first hand.  Escher’s “Drawing Hands” came to mind when I read of a planned initiative by the Obama administration to promote a decade-long project to map the human brain.

Officially, the project is still under wraps until the President announces his budget priorities later this month.  But according to a New York Times report by John Markoff, plans include increased federal funding for neurological research directed at mapping increasingly complex brains, ranging from those of a fruit fly up to the world’s smallest mammal, a type of shrew.  But the ultimate goal is to learn how essentially every neuron in the human brain is connected, and how the whole thing works:  a wiring diagram of the brain, if you will.  Hopes are that such knowledge could lead to new therapies for presently incurable brain disorders such as Alzheimer’s disease and other forms of dementia.

Inevitably, this project has been compared to the Human Genome Project, which was completed about a decade ago at a cost of under $4 billion.  Some estimates say that the information gained from that project has returned up to $140 for every dollar spent.  Aside from the purely economic results, the mapping of the human genome was a landmark scientific achievement in its own right, which has led to further questions and discoveries in an already burgeoning field.

Does the human-brain mapping project hold the same amount of promise, either economically or scientifically?  The first question that should be asked is, “Can it work?”  And some scientists are already voicing doubts.

Markoff quotes neuroscientist Donald G. Stein as saying, “I believe the scientific paradigm underlying this mapping project is, at best, out of date and at worst, simply wrong.”  Apparently, the old analogies of the brain as a massive kind of telephone switchboard, or even a “wet computer,” fail to capture essential aspects of an organ which can develop new neurons in response to external stimuli, and has recently proved to be much more plastic than earlier theories supposed.  To the extent that the project imposes an outdated brain model on researchers, it will not succeed.  But every researcher knows that what you say you are going to do in order to get research money is not necessarily the thing you actually end up doing, so this concern is probably not as great as you might think.

What is of greater concern now is the question of basic feasibility.  When Dr. Rafael Yuste of Columbia University was asked at a September 2011 conference what he would really like to be able to do with the brain, he replied, “I want to be able to record from every neuron in the brain at the same time.”  Simply storing the data that would result from such an instrument is a brain-boggling proposition.  One estimate is that you would need the data-storage equivalent of about 600 million hard drives the size of the one on my personal computer (500 gigabytes) to store all the neurological activity that goes on in only one brain for a year.  The next time you say “nothing’s on my mind,” think about that.
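That estimate is easy to sanity-check by multiplying the quoted figures out.  A quick sketch, using only the numbers given above:

```python
# The article's figures: ~600 million hard drives of 500 GB each
# to hold one year of one brain's neural activity.
drives = 600_000_000
gb_per_drive = 500

total_gb = drives * gb_per_drive            # 3 x 10^11 gigabytes
total_exabytes = total_gb / 1_000_000_000   # 1 exabyte = 10^9 GB
print(f"~{total_exabytes:.0f} exabytes per brain-year")
```

That works out to roughly 300 exabytes for a single brain-year, a scale comparable to a sizable fraction of all the digital storage in the world at the time.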

Of course, data storage has been getting more efficient for decades, and it will probably continue to do so for a while.  But storing the data is nowhere near as hard as obtaining it in the first place.  Right now, the only way to monitor individual brain neurons is to connect wires to them, which requires opening the skull.  There are various means to monitor the brain non-invasively, but at present they have a fairly poor resolution, on the order of a millimeter at best.  And there are thousands of neurons in each cubic millimeter of brain.  Futuristic plans to send molecule-size data recorders into the brain and record the results on DNA are still purely drawing-board notions, and it is not clear they will ever work.
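To see why millimeter-scale resolution falls so far short, one can multiply a neuron density by the volume of one resolvable element.  Both numbers in this sketch are illustrative assumptions (the article says only "thousands" per cubic millimeter, and densities vary by brain region):

```python
# Illustrative assumption: a few thousand neurons per cubic millimeter,
# consistent with the article's "thousands"; real densities vary widely.
neurons_per_mm3 = 5_000
resolution_mm = 1.0          # ~best-case non-invasive resolution

# Every non-invasive reading averages over all the neurons in one
# resolution-limited volume element, blurring them together.
neurons_per_voxel = neurons_per_mm3 * resolution_mm ** 3
print(f"~{neurons_per_voxel:,.0f} neurons blurred into each reading")
```

So even in the best case, today's non-invasive methods report the summed behavior of thousands of neurons at once, nothing like the neuron-by-neuron recording Dr. Yuste wishes for.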

When the Human Genome Project began, we knew that DNA sequencing was possible—it was just very slow and tedious.  Rapid advances in technology enabled the project to finish ahead of schedule.  It is by no means clear that massive monitoring of individual brain neurons is even theoretically possible.  And unmentioned so far is the question brought up by the Escher drawing:  can the brain really understand itself?  In particular, what would happen if Dr. Yuste gets his wish and one day sits down at a computer monitor that shows him the output of his own brain in some meaningful way?  If you’ve ever pointed a TV camera at a monitor showing the camera’s own field of view, you have seen some weird patterns show up.  It’s not pleasant to contemplate what it might mean for your own brain to watch itself in action.

As with any great leap in scientific knowledge these days, the rationale for it is that it may lead to practical benefits such as cures for diseases like Alzheimer’s and autism.  While we can’t discount these possibilities, neither can we discount the notion that once it’s possible to exhaustively monitor the activity of the human brain, it may be possible to read thoughts in a way that would amount to the ultimate invasion of privacy.  At the very least, this possibility raises concerns that should be taken seriously.  So far, everyone whose brain has been monitored has given consent to the process, we hope.  But the molecule-size brain monitors could be delivered without the patient’s knowledge or consent. 

So far, this kind of thing is in the realm of science fiction rather than fact.  But before it becomes fact, let’s hope that we have a full public discussion of the potential downsides as well as the benefits of a map of the human brain, assuming such a thing is even possible.

Sources:  John Markoff’s article “Obama Seeking to Boost Study of Human Brain,” appeared in the online edition of the New York Times on Feb. 17, 2013, at http://www.nytimes.com/2013/02/18/science/project-seeks-to-build-map-of-human-brain.html.  He followed it with an analysis piece on the same subject on Feb. 24 at http://www.nytimes.com/2013/02/26/science/proposed-brain-mapping-project-faces-significant-hurdles.html.  I relied on both of these pieces for this article.  The M. C. Escher work “Drawing Hands” can be viewed at http://kafee.files.wordpress.com/2009/10/drawing_hands.jpg