Judging the Judgments of AI
If New York City mayor Bill de Blasio allows a new bill passed by the city council to go into effect, employers who use artificial-intelligence (AI) systems to evaluate potential hires will be obliged to conduct a yearly audit of their systems to show they are not discriminating with regard to race or gender. Human resource departments have turned to AI as a quick and apparently effective way to sift through the mountains of applications that Internet-based job searches often generate. And AI isn't limited to hiring: increasing numbers of organizations are using it for job evaluations and other personnel-related functions.
The bill would also give candidates the option of choosing an alternative process by which to be evaluated. So if you don't want a computer evaluating you, you can ask for another opinion, although it isn't clear what form this alternative might take. Nor is it clear what would keep every job applicant from asking for the alternative at the outset, but perhaps you have to be rejected by the AI system first in order to request it.
In any case, New York City's proposed bill is one of the first pieces of legislation designed to address an increasingly prominent issue: the question of unfair discrimination by AI systems.
Anyone who has been paying attention to the progress of AI technology has heard horror stories about things as seemingly basic as facial recognition. An article in the December issue of Scientific American notes that MIT's Media Lab found facial-recognition technologies to be less accurate on non-white faces than on white ones.
Those who defend AI can cite the old saying among software engineers: "garbage in, garbage out." The performance of an AI system is only as good as the set of training data it uses to "learn" how to do its job. If the software's designers select a training database that is short on non-white faces, for example, or on women, or on other groups that have historically suffered unfair discrimination, then its performance will probably be inferior when it deals with people from those groups in the real world. So one answer to discriminatory outcomes from AI is to improve the training data pools, with special attention paid to minority groups.
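To make that point concrete, here is a minimal, hypothetical sketch in Python of the sort of per-group accuracy check that exposes a skewed training set. The groups, numbers, and function names are invented for illustration; this is not any vendor's actual audit code.

```python
# A minimal sketch (assumed example, not a real audit tool) of checking
# a model's accuracy separately for each demographic group. A model trained
# mostly on one group tends to score worse on the others.

from collections import defaultdict

def accuracy_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, prediction in records:
        total[group] += 1
        if truth == prediction:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation results for a face-matching model.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 1), ("group_b", 1, 0),
]

print(accuracy_by_group(results))
# e.g. {'group_a': 1.0, 'group_b': 0.5} -- a gap like this is the warning sign.
```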
In implementing the proposed New York City legislation, someone is going to have to set standards for the non-discrimination audits. Out of a pool of 100 women and 100 men who are, on average, otherwise equally qualified, what will the AI system have to do in order to be judged non-discriminatory? Picking 10 men and no women would be ruled out of bounds, I'm pretty sure. But what about four women and six men? Or six women and four men? At what point will it be viewed as discriminating against men? Or do the people enforcing the law have ideological biases that make them consider discriminating against men to be impossible? So far, none of these questions have been answered.
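One way an auditor might frame the question numerically is to compare selection rates between the two groups. The sketch below uses the hypothetical numbers from the paragraph above; the 0.8 threshold in it is purely an assumption for illustration, not anything the city has actually set.

```python
# Hypothetical illustration of the auditing question: compare the selection
# rates for two equally qualified pools of 100 candidates each.

ASSUMED_THRESHOLD = 0.8  # an assumed cutoff for illustration, not the law

def selection_rate(selected, pool_size):
    return selected / pool_size

def rate_ratio(rate_a, rate_b):
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    low, high = sorted((rate_a, rate_b))
    return low / high if high > 0 else 0.0

women_rate = selection_rate(4, 100)   # 4 women picked from 100
men_rate = selection_rate(6, 100)     # 6 men picked from 100

ratio = rate_ratio(women_rate, men_rate)
print(f"women {women_rate:.2f}, men {men_rate:.2f}, ratio {ratio:.2f}")
print("flagged for review" if ratio < ASSUMED_THRESHOLD else "within assumed threshold")
# The ratio here is about 0.67; whether that counts as discriminatory is
# exactly the judgment call the legislation leaves unanswered.
```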
Perhaps the best feature of the proposed law is not the annual-audit provision, but the conferral of the right to request an alternative evaluation process. There is a trend in business these days to weed out any function or operation that up to now has been done by people, and replace the people with software. There are huge sectors of business operations where this transition is well-nigh complete.
Credit ratings, for example, are accepted by nearly everyone, lenders and borrowers alike, and are generated almost entirely by algorithms. The difference between this process and AI systems is that, in principle at least, one can ask to see the equations that make up one's credit rating, although I suspect hardly anyone does. The point is that if you ask how your credit rating was arrived at, someone should be able to tell you how it was done.
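To illustrate what "seeing the equations" could mean, here is a made-up, fully transparent scoring formula in Python. The factors, weights, and base score are invented for this sketch; real credit-scoring models are proprietary and far more elaborate.

```python
# A toy, fully transparent scoring formula: every weight is visible, so an
# applicant could in principle be told exactly why the number came out as it
# did. (Factors and weights are invented for illustration only.)

WEIGHTS = {
    "on_time_payment_rate": 300,   # fraction of payments made on time
    "credit_utilization": -150,    # fraction of available credit in use
    "years_of_history": 10,        # length of credit history in years
}
BASE_SCORE = 500

def toy_credit_score(factors):
    score = BASE_SCORE
    for name, weight in WEIGHTS.items():
        score += weight * factors.get(name, 0.0)
    return round(score)

print(toy_credit_score({
    "on_time_payment_rate": 0.95,
    "credit_utilization": 0.30,
    "years_of_history": 12,
}))
# Every term in the sum can be pointed to and explained -- unlike the
# deep-network systems discussed next, whose reasoning is opaque even
# to their developers.
```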
But AI is a different breed of cat. For the newest and most effective kinds (so-called "deep neural networks"), even the software developers can't tell you how the system arrives at a given decision. If it's opaque to its developers, the rest of us can give up any hope of understanding how it works.
Being considered for a job isn't the same as being tried for a crime, but there are useful parallels. In both cases, one's past is being judged in a way that will affect one's future. One of the most beneficial traditions of English common law is trial by a jury of one's peers. Although trial by jury itself has fallen on hard times because the legal system has adopted the same efficiency measures as the business world (some judges are even using AI to help them decide sentences), the principle that a human being should ultimately be judged not by a machine but by other human beings is one that we abandon at our peril.
Theologians recognize that many heresies are not so much the stating of something false as the overemphasis of one true idea at the expense of other true ideas. If we make efficiency a goal in itself rather than simply a means to more important goals, we will run roughshod over other, more important principles and practices that have given rise to modern Western civilization: the right to be judged by one's peers, for example, instead of by an opaque and all-powerful algorithm.
New York's city council is right to recognize that AI personnel evaluation can be unfair. Whether they have found the best way to deal with the problem is an open question. But at least they acknowledge that all is not well with an AI-dominated future, and that something must be done before we get so used to it that it's too late to recover what we've lost.
Sources: The AP news story by Matt O'Brien entitled "NYC aims to be first to rein in AI hiring tools" appeared on Nov. 19 at https://apnews.com/article/technology-business-race-and-ethnicity-racial-injustice-artificial-intelligence-2fe8d3ef7008d299d9d810f0c0f7905d. The Scientific American article "Spying On Your Emotions" by John McQuaid (pp. 40-47) appeared in the December 2021 issue.