Monday, March 13, 2023

Justice and Artificial Intelligence

 

Historians of technology are familiar with the problem of technological advances that outstrip the legal system, leading to situations that are clearly unfair but leave some people with no legal recourse.  In a recent article in The Dispatch, artificial intelligence (AI) expert Matthew Mittelsteadt calls for a case-by-case approach to the problem of AI advancing beyond the borders of the law.

           

In the article, reporter Alec Dent cited the case of someone who recently filed for a copyright on an AI-generated piece of artwork.  The U. S. Copyright Office rejected the application, which was filed on behalf of the AI program itself, because it lacked the "human authorship" needed to create a valid copyright claim.  We don't know what would have happened if the programmer or software developer had filed for a copyright on his or her own behalf, arguing that the AI program was just a tool like an artist's brush.  But as AI systems take over more and more creative jobs formerly done by humans, the distinction may become harder to make.

 

ChatGPT, the powerful AI chatbot developed by the OpenAI firm, has attracted a lot of attention in academia for its ability to come up with plausible-sounding paragraphs of English text on virtually any topic, including those typically assigned as essay questions in the humanities.  It's not as though students weren't cheating before ChatGPT—there are numerous online stores where essays and even completed lab reports and exams can be purchased.  But ChatGPT creates work that, if not exactly original, hasn't existed in exactly the same form before, and thus falls between two stools: previously existing work that is merely copied, and truly original work that a human mind has originated.

 

The original intent of copyright law, and anti-cheating rules at universities for that matter, was to allocate the rewards (or penalties) due to a piece of original work fairly.  If Mr. A wrote an essay on the downfall of the Soviet Union, whether it was for his history class or for The Atlantic, Mr. A should get the credit (or blame) for it, not some word-processing software or AI program. 

 

One big problem I foresee will arise from the defective anthropology that prevails in our modern culture.  Human beings are different from other animals and from machines due to a difference in kind, not merely in degree.  In the rush to embrace ever more impressive applications of AI, a lot of people have lost sight of this fact, if they ever believed it in the first place. 

 

The people at the U. S. Copyright Office seem to believe it, at least so far, but the person who applied for a copyright in the name of an AI program seems to think that there's no essential difference between human intelligence and AI.  Unfortunately, that view is very popular in high places—in much of academia, in the Silicon Valley world, and even in parts of government.

 

A degraded view of humanity results when one assumes that there is no essential distinction between humans and advanced AI programs.  Of course there is a practical distinction, so far—AI systems still make stupid mistakes that a normally intelligent five-year-old wouldn't make.  But the AI optimists see this as merely a temporary condition that will disappear with further advances in the field.

 

Saying that human intelligence and AI are basically the same drags humans down to the level of machines.  And all the consequences of treating humans as machines will follow from that attitude.  No AI expert wants to be treated like a machine.  But allowing AI programs to hold copyrights, or to be in charge of decisions that formerly required human input, does exactly that to the subjects or patients of the AI system in question.

 

Justice is something that happens among human beings, and human beings are not machines.  People advocating for robot rights and similar policies that attempt to attribute human-like characteristics to AI programs are not exalting robots—they are degrading humans in a subtle and indirect way, a way that selectively degrades some humans more than others. 

 

We have already heard of cases in which AI-informed medical or legal decisions turned out to be highly discriminatory against certain minority groups.  The AI developers are rightly exercised about such problems, but it takes human beings to recognize that other human beings are being treated unfairly.  The whole body of the law can be regarded as a huge algorithm for executing justice among people—after all, what are laws but a set of rules and procedures for carrying them out? 
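
As a toy illustration of that "huge algorithm" metaphor (a sketch of my own, not anything from Dent's article), one can write a few statutes as data and a procedure that applies them mechanically.  The rules and penalties below are entirely hypothetical:

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Case:
    speed_mph: int   # how fast the driver was going
    limit_mph: int   # the posted speed limit

# Each hypothetical "statute" pairs a condition with a prescribed outcome,
# listed from most to least severe.
STATUTES: List[Tuple[Callable[[Case], bool], str]] = [
    (lambda c: c.speed_mph > c.limit_mph + 20, "reckless driving charge"),
    (lambda c: c.speed_mph > c.limit_mph, "speeding fine"),
]

def adjudicate(case: Case) -> str:
    """Apply the first statute whose condition the case satisfies."""
    for applies, outcome in STATUTES:
        if applies(case):
            return outcome
    return "no violation"

print(adjudicate(Case(speed_mph=58, limit_mph=35)))  # reckless driving charge
print(adjudicate(Case(speed_mph=40, limit_mph=35)))  # speeding fine
print(adjudicate(Case(speed_mph=30, limit_mph=35)))  # no violation

The rules execute flawlessly, but notice what the procedure cannot do: it cannot step outside its own rule table and recognize that the rules themselves might treat some class of people unfairly.  That judgment remains a human one.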

 

But as yet, we have not handed over the execution of the laws to machines.  Lawyers, judges, and juries do that.  The institution of the jury trial, as rare as it's getting to be in criminal justice, is a common-law recognition that ordinary people deserve to be judged by other ordinary people, not just by experts, whether the experts are human beings or computers.

 

Turning over works of creativity and judgment to AI systems may be efficient.  It may even be fairer in a human sense than leaving such actions in the fallible hands of human beings.  But beyond a certain point—that point to be judged by humans, not by machines—it becomes a dereliction of duty, just as a student typing his history assignment into ChatGPT and handing in the AI program's answer is a dereliction of his duty to think for himself.

 

The law will eventually catch up to today's AI innovations, as it always does.  Of course, by then we will have new advances, and so for a time at least we will see a kind of legal-AI arms race, with AI leading and the law lagging behind.  But we will all be losers if legislators and AI developers forget that human beings are different in kind, not just degree, from AI programs.  If we forget that critical fact, we may deserve what happens to us.

 

Sources:  Alec Dent's article "The Gaps Between the Law and Artificial Intelligence" was published on Mar. 8, 2023 at https://thedispatch.com/article/the-gaps-between-the-law-and-artificial-intelligence/.  I also referred to Wikipedia articles on ChatGPT and OpenAI.

 

NOTE:  There used to be an RSS feed that readers who wished to be notified of new blog articles (which are issued every Monday morning) could subscribe to.  A reader recently pointed out to me that this feature was no longer functioning.  I am currently trying to repair the problem and add an automatic email notification system, but it may take a while, so I ask readers interested in these features to be patient.
