Google’s parent company
Alphabet has recently demonstrated an artificial-intelligence (AI) algorithm
that can estimate how likely hospital patients are to die soon. A piece in Bloomberg News described a woman with end-stage
breast cancer who arrived at the hospital with fluid-filled lungs and underwent numerous
tests. The Google algorithm said there
was a 20% chance she would not survive her hospital stay, and she died a few
days later.
One data point—or one
life. The woman was both, and therein
lies the challenge for researchers wanting to use AI to improve health
care. AI is a data-hungry beast,
thriving on huge databases and sweeping up any scrap of information in its
maw. One of the best features of
Google’s medical AI system is that it doesn’t need the raw data gussied up
first: no human being has to retype messy notes into a form the computer can
use, a chore that consumes as much as 80% of the effort devoted to other
medical AI software. Google’s system takes almost any kind of hand-scrawled
data and integrates it into patient evaluations. So to help, the system needs
to know everything, no matter how trivial or unrelated to the case a given
detail may appear.
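To make the contrast concrete, here is a minimal sketch in Python of what skipping the gussying-up step looks like: messy free-text notes go straight into a statistical model with no human re-keying in between. The notes, the outcome labels, and the simple bag-of-words model are all invented for illustration; this is not Google’s actual system.

```python
# Toy sketch: raw clinical notes feed a risk model directly, with no
# manual structuring step. All notes, labels, and the model itself are
# hypothetical illustrations, not Google's actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Messy shorthand, roughly as a clinician might actually write it.
notes = [
    "pt c/o SOB, bilat pleural effusions, hx stage IV breast ca",
    "routine follow-up, vitals stable, no acute complaints",
    "admitted w/ pneumonia, responding to abx, afebrile x48h",
    "ICU day 3, pressors weaned, guarded prognosis",
]
died_in_hospital = [1, 0, 0, 1]  # invented outcome labels

# Turning raw text into numeric features automatically is the step that
# replaces the costly manual re-keying other systems require.
vectorizer = TfidfVectorizer()
features = vectorizer.fit_transform(notes)
model = LogisticRegression().fit(features, died_in_hospital)

# Score a new, equally messy note.
new_note = ["end-stage breast ca, fluid in lungs, multiple tests ordered"]
risk = model.predict_proba(vectorizer.transform(new_note))[0, 1]
print(f"estimated in-hospital mortality risk: {risk:.0%}")
```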
But then the human aspect enters. To make my point, I’ll draw an analogy to a
different profession: banking. I’m old
enough to remember when bankers evaluated customers with a combination of hard
data—loans paid off in the past, bank balances, and so on—and intuition gained
from meeting and talking with the customer.
Except for maybe a few high-class boutique banks, this is no longer the
case. The almighty credit score ground
out by opaque algorithms reigns, and no amount of personal charm exerted for
the benefit of a loan officer will overcome a low credit score.
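The shift is easy to caricature in a few lines of code. In the sketch below (the cutoff and the applicant’s score are invented), notice that the banker’s old intuition has no parameter to flow into:

```python
# Toy illustration of algorithmic lending: a hard cutoff on a score.
# The threshold and the applicant's score are invented for illustration.
CREDIT_SCORE_CUTOFF = 660  # hypothetical lender policy

def approve_loan(credit_score: int) -> bool:
    """The algorithm's verdict; personal charm is not a parameter."""
    return credit_score >= CREDIT_SCORE_CUTOFF

print(approve_loan(640))  # False, and no interview can change it
```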
It’s one thing when we’re talking about loans, and another
when the subject is human lives. It’s
easy to imagine a dystopian narrative involving a Google-like AI program that
comes to dominate the decision-making process in a situation where medical
resources are limited and there are more patients needing expensive care than
the system can handle. Doctors will turn
to their AI assistants and ask, “Which of these five patients is most likely to
benefit from a kidney transplant?” It’s
likely that some form of this process already goes on today, but is limited to
comparatively rare situations such as transplants.
The U.S. government’s Medicare system is currently forecast
to become insolvent eight years from now.
Even if Congress manages to bail it out, the flood of aging baby boomers
like me threatens to swamp the nation’s health-care system. In such a crisis, the temptation to use AI
algorithms to allocate limited resources will be overwhelming.
From an engineering-efficiency standpoint, it all makes
sense. Why waste limited resources on
someone who isn’t likely to benefit from them, when another person may get them
and go on to live many years longer?
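Part of the temptation is how little code the utilitarian calculus takes. A sketch follows, with all patients and numbers invented for illustration: rank candidates by expected benefit, the probability of success times the years gained.

```python
# Toy sketch of utilitarian triage: allocate a scarce resource by
# expected life-years gained. All patients and figures are invented.
patients = [
    {"name": "A", "p_success": 0.90, "years_gained": 8},
    {"name": "B", "p_success": 0.40, "years_gained": 25},
    {"name": "C", "p_success": 0.70, "years_gained": 15},
]

for p in patients:
    p["expected_years"] = p["p_success"] * p["years_gained"]

# The "engineering-efficiency" answer: treat whoever tops this list.
for p in sorted(patients, key=lambda p: p["expected_years"], reverse=True):
    print(f"{p['name']}: {p['expected_years']:.1f} expected life-years")
# C: 10.5, B: 10.0, A: 7.2. Patient A, the likeliest to survive the
# procedure itself, ends up at the back of the line.
```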
That’s fine except for two things.
One, even the best AI systems aren’t perfect, and now and
then there will be mistakes—sometimes major ones.
And two, what if an AI medical system tells you that you’re not going to get the
treatment that might make the difference between life and death? Even the hardiest utilitarian (“greatest
benefit for the greatest number”) may have second thoughts about that outcome.
Of course, resource
allocation in health care is nothing new.
There have always been more sick people than facilities to care for them
all. The way we’ve rationed care in the past has been through a combination
of economics, the judgment of medical personnel, and occasional government
intervention. As computers have made inroads into various parts of the
process, it’s only natural that they be used, along with other available
means, to make wise choices. But there’s a
difference between using computers as tools and completely turning over decision-making
to an algorithm.
Another concern raised about Google’s foray into applying AI
to health care is the issue of privacy.
Medical records are among the most sensitive types of personal data, and
in the past, elaborate precautions have been taken to guard the sanctity of
each individual’s records. But AI
algorithms work better the more data they have, and so, simply to improve at
what they do, they will need access to as much data as they can get their
digital hands on.
According to one survey, less than half of the public trusts Google to
keep their data private. While that is
just a perception, it’s a perception that Google, and the medical profession in
general, ignore at their peril. One
scandal or major data breach involving medical records could set back the
entire medical-AI industry, so all participants will need to tread carefully;
otherwise the whole experiment could come to a screeching halt.
Predicting when people will die is only one of the many
abilities that medical AI of the future offers.
In figuring out hard-to-diagnose cases, in recommending treatment
customized to the individual, and in optimizing the delivery of health care
generally, it shows great promise in making health care more effective and
efficient for the vast majority of patients.
But doctors and other medical authorities should beware of letting the
algorithms gain the upper hand, and turning their judgment and ethics over to a
printout, so to speak. Because Google’s
system is still in the prototype stage, we don’t know what the effects of its
more widespread deployment will be. But
whatever form it takes, we need to make sure that the vital life-or-death
decisions involved in medical care are made by responsible people, not just
machines.