Monday, November 06, 2023

The Biden Administration Tackles AI Regulation—Sort Of

 

In our three-branch system of government, the power of any one branch is intentionally limited so that the democratic exercise of the public will cannot be thwarted by any one branch going amok.  This division of power leads to inefficiency and sometimes confusion, but it also means that the damage done by any one branch—executive, legislative, or judicial—is limited compared to what a unified dictatorship could do.

 

We're seeing the consequences of this division in the recent executive order announced by the Biden administration on the regulation of artificial intelligence (AI).  One take on the fact sheet that preceded the 63-page order itself appeared on the website of IEEE Spectrum, a general-interest magazine for members of IEEE, the largest organization of professional engineers in the world. 

 

It's interesting that reactions from most of the technically-informed people interviewed by the Spectrum editor were guardedly positive.  Lee Tiedrich, a distinguished faculty fellow at Duke University's Initiative for Science and Society, said ". . . the White House has done a really good, really comprehensive job."  She thinks that while respecting the limitations of executive-branch power, the order addresses a wide variety of issues with calls to a number of Federal agencies to take actions that could make a positive difference.

 

For example, the order charges the National Institute of Standards and Technology (NIST) with developing standards for "red-team" testing of AI products for safety before public release.  Red-team testing involves purposefully trying to do malign things with a product to see how bad the results can get.  Although NIST doesn't have to do the testing itself, coming up with rigorous standards for such testing in the manifold different circumstances that AI is being used for may prove to be a challenge that exceeds the organization's current capability.  Nevertheless, you don't get what you don't ask for, and as a creature of the executive branch, NIST is obliged at least to try.

 

The U. S. Department of Commerce will develop per this order "guidance for content authentication and watermarking to clearly label AI-generated content."  Cynthia Rudin, a Duke professor of computer science, sees that some difficulty may arise when the question of watermarking AI-generated text comes up.  Her point seems to be that such watermarking is hard to imagine other than seeing (NOTE:  AI-GENERATED TEXT) inserted every so often in a paragraph, which would be annoying, to say the least.  (You have my guarantee that not one word of this blog is AI-generated, by the way.)

 

Other experts are concerned about the use of data sets for training AI-systems, especially the intimidatingly-named "foundational AI" ones that are used as a basis for other systems with more specific roles.  Many training data sets include a substantial fraction of worldwide Internet content, including millions of copyrighted documents, and concern has been raised about how copyrighted data is being exploited by AI systems without remuneration to the copyright holders.  Susan Ariel Aaronson of George Washington University hopes that Congress will take more definite action in this area to go beyond the largely advisory effect that Biden's executive order will have.

 

This order shares in common with other recent executive orders a tendency to spread responsibilities widely among many disparate agencies, a feature that is something of a hallmark of this administration.  On the one hand, this type of approach is good at addressing an issue that has multiple embodiments or aspects, which is certainly true of AI.  Everything from realistic-looking deepfake photos, to genuine-sounding legal briefs, to functioning computer code has been generated by AI, and so this broad-spectrum approach is an appropriate one for this case.

 

On the other hand, such a widely-spread initiative risks getting buried in the flood of other obligations and tasks that executive agencies have to deal with, ranging from their primary purposes (NIST must establish measurement standards; the Department of Commerce must deal with commerce, etc.) and other initiatives such as banning workplace discrimination against LGBT employees, one of the things that Biden issued an executive order for in his first day of office.  This is partly a matter of publicity and public perception, and partly a question of priorities that the various officials in charge of the various agencies set.  With the growing number of Federal employees, it's an open question as to what administrative bang the taxpayer is getting for his buck.  Regulation of AI is something that there is widespread agreement on—the extreme-case dangers have become clearer in recent months and years, and nobody wants AI to take over the government or the power grid and start treating us all like lab rats that the AI owner has no particular use for anymore. 

 

But how to avoid both the direst scenarios, as well as the shorter-term milder drawbacks that AI has already given rise to, is a thorny question, and the executive order will only go a short distance toward that goal.

 

One nagging aspect of AI regulation is the fact that the new large-scale "generative AI" systems trained on vast swathes of the Internet are starting to do things that even their developers didn't anticipate:  learning languages that the programmers hadn't intended the system to learn, for example.  One possible factor in this uncontrollability aspect of AI that no one in government seems to have considered, at least out loud, is dwelled on at length by Paul Kingsnorth, an Irish novelist and essayist who wrote "AI Demonic" in the November/December issue of Touchstone magazine.  Kingsnorth seriously considers the possibility that certain forms and embodiments of AI are being influenced by a "spiritual personification of the age of the Machine" which he calls Ahriman. 

 

The name Ahriman is associated with a Zoroastrian evil spirit of destruction, but Kingsnorth describes how it was taken up by the theosophist Rudolf Steiner, and then an obscure computer scientist named David Black who testified to feeling "drained" by his work with computers back in the 1980s.  The whole article should be read, as it's not easy to summarize in a few sentences.  But Kingsnorth's basic point is clear:  in trying to regulate AI, we may be dealing with something more than just piles of hardware and programs.  As St. Paul says in Ephesians 6:12, ". . . we wrestle not against flesh and blood [and server farms], but against principalities, against powers, against the rulers of the darkness of this world, against spiritual wickedness in high places." 

 

Anyone trying to regulate AI would be well advised to take the spiritual aspect of the struggle into account as well.

 

Sources:  The IEEE Spectrum website carried the article by Eliza Strickland, "What You Need to Know About Biden's Sweeping AI Order" at https://spectrum.ieee.org/biden-ai-executive-order.  I also referred to an article on AI on the Time website at https://time.com/6330652/biden-ai-order/.  Paul Kingsnorth's article "AI Demonic" appeared in the November/December 2023 issue of Touchstone, pp. 29-40, and was reprinted from Kingsnorth's substack "The Abbey of Misrule."

No comments:

Post a Comment