Monday, July 24, 2023

Is AI Better Regulated After the White House Meeting?

The easy answer is that it's too soon to tell.  But for a number of reasons, the July 21 meeting between President Biden and leaders of seven Big Tech firms, including Google, Microsoft, and OpenAI, may prove to be more show than substance.

Admittedly, there is widespread agreement that some sort of regulation of artificial intelligence (AI) should be considered.  Even industry leaders such as Elon Musk have been warning that things are moving too fast, and that small but real risks of huge catastrophes are lurking out there, risks that could be averted by agreed-upon restrictions or regulations on the burgeoning AI industry.

Last Friday's White House meeting of representatives from seven leading AI firms—Amazon, Anthropic, Google, Inflection, Meta (formerly Facebook), Microsoft, and OpenAI—produced a "fact sheet" that listed eight bullet-point commitments made by the participants.  The actual meeting was not open to the public, but one presumes the White House would not publish such things without at least the passive approval of the participants. 

Browsing through the items, I don't see many things that a prudent giant AI corporation wouldn't be doing already.  For example, take "The companies commit to internal and external security testing of their AI systems before their release."  Not to do any security testing would be foolish.  External testing, meaning testing by third-party security firms, is probably pretty common in the industry already, although not universal. 

The same thing goes for the commitment to "facilitating third-party discovery and reporting of vulnerabilities in their AI systems."  No tech firm worth its salt is going to ignore an outsider's legitimate report of a weak spot in its products, and so this is again something that the firms are probably doing already.

The most technical commitment, but again one that the companies are probably doing already, is to "protect proprietary and unreleased model weights."  Unversed as I am in AI technicalities, I'm not sure exactly what this means, but the model weights appear to be something like the keys to how a given AI system runs once it's been trained, and so it only stands to reason that the companies would protect assets that cost them a lot of computing time to obtain, even before the White House told them to.
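
For readers as unversed as I am, a toy example may make "model weights" more concrete.  The sketch below is a minimal illustration in plain Python; the tiny model and its numbers are invented for this post, not anything these firms actually use.  The weights are simply the numbers a system learns during training, and once learned, they effectively are the model.

```python
# Toy illustration: "model weights" are the numeric parameters a model
# learns from data.  This example fits y = 2x + 1 by gradient descent.
import json
import random

def train_tiny_model(data, epochs=2000, lr=0.1):
    """Learn w and b for y = w*x + b; w and b are the model's 'weights'."""
    w, b = random.random(), random.random()
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y   # prediction error on one example
            w -= lr * err * x       # nudge the weights to shrink the error
            b -= lr * err
    return {"w": w, "b": b}

weights = train_tiny_model([(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)])
print(weights)  # roughly {'w': 2.0, 'b': 1.0}

# The trained weights *are* the product of all that training effort, so
# they get saved -- and, per the commitment, protected -- as a file.
with open("model_weights.json", "w") as f:
    json.dump(weights, f)
```

Scale that little file up to billions of numbers learned over months of computing time, and it is obvious why a company would guard it closely, White House pledge or no.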

Four bullet points address "Earning the Public's Trust," which, incidentally, implies that the firms have a considerable way to go to earn it.  But we'll let that pass. 

The firms commit to developing some way of watermarking or otherwise indicating when "content is AI generated."  That's all very well, but whether a given piece of content is AI-generated is rarely a simple yes-or-no question.  What if a private citizen takes a watermarked AI product and incorporates it manually into something else that is no longer watermarked?  The intention is good, but the path to execution is foggy, to say the least.
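
To make that concern concrete, here is a deliberately crude sketch in plain Python.  The zero-width-character scheme below is purely hypothetical, not any company's actual method, but it shows how easily an attached watermark can vanish once a human excerpts the material by hand.

```python
# Hypothetical watermark: tag AI output with invisible zero-width characters.
ZW_MARK = "\u200b\u200c\u200b"  # zero-width space / non-joiner / space

def generate_ai_text(prompt: str) -> str:
    """Stand-in for an AI model; appends the invisible watermark to its output."""
    return f"Generated response to: {prompt}" + ZW_MARK

def looks_ai_generated(text: str) -> bool:
    """Detector: checks for the trailing watermark."""
    return text.endswith(ZW_MARK)

original = generate_ai_text("summarize the meeting")
print(looks_ai_generated(original))  # True: the marker survives copy-paste

# A private citizen quotes part of the text in their own document:
excerpt = original[:25] + "..."
print(looks_ai_generated(excerpt))   # False: the watermark is gone
```

Subtler schemes that hide a statistical signal in the word choices themselves are more robust, but they too degrade once the text is edited, paraphrased, or mixed with human writing, which is why the path to execution remains foggy.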

Perhaps the commitment with the most bite is this one:  "The companies commit to publicly reporting their AI systems’ capabilities, limitations, and areas of appropriate and inappropriate use."  The wording is broad enough to drive a truck through, although again, the intention is good.  How often, how detailed, and how extensive such reports may be is left up to each company.

The last two public-trust items commit the firms to "prioritizing" research into the societal risks of AI, and to using AI to address "society's greatest challenges."  If I decide not to wash my car today, I have prioritized washing my car—negatively, it is true, but hey, I said I'd prioritize it!

So what is different about the way these firms will carry out their AI activities after the White House meeting?  A lot of good intentions were aired, and if the firms had enjoyed a lot of public trust in the first place, these good intentions might have found an audience that believed they would be carried out.

But the atmosphere of cynicism that has gradually encroached on almost all areas of public life makes such an eventuality unlikely, to say the least.  And this cynicism has arisen due in no small part to the previous doings of the aforementioned Big Tech firms—specifically, their activities in social media.

When you compare the health of what you might call the body politic of the United States today with what it was, say, fifty years ago, the contrast is breathtaking.  In 1973, 42% of U.S. residents surveyed said they had either a "great deal" or "quite a lot" of confidence in Congress, and only 16% said they had "very little" confidence.  In 2023, only 8% report either a great deal or quite a lot of confidence, and fully 48% say they have "very little" confidence in Congress.  While this trend has been building for years, much of it has occurred only since 2018, after the social-media phenomenon overtook legacy media as the main conduit of political information exchange—if one can call it that.

Never mind what AI may do in the future.  We are standing in the wreckage of something it has done already:  it has caused great and perhaps permanent damage to the primary means a nation has of governing itself.  Not AI alone, to be sure, but AI has played an essential role in the way companies have profited from encouraging the worst in people.

It would be nice if last Friday's White House meeting triggered a revolution in the way Big Tech uses AI and its other boxes of tricks: a revolution that encourages genuine human flourishing without the horrific side effects, in both personal lives and public institutions, that we have seen already.  But getting some CEOs in a private room with the President and issuing a nice-sounding press release afterwards isn't likely to do that.  It's a step in the right direction, but a step so tiny that it's almost not worth talking about.

Historically, needed technical regulations have come about only when vivid, graphic, and (usually) sudden harm has been caused.  The kinds of damage AI can do are rarely that striking, so we may have to wait quite a while before meaningful AI regulations are even considered.  But in my view, the right time for them was years ago.

Sources:  The White House press release on the July 21 meeting of AI firms with President Biden can be found at https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/.  I also referred to a PBS report on the meeting at https://www.pbs.org/newshour/politics/watch-live-biden-announces-ai-safeguards-after-meeting-with-tech-leaders.  The Gallup poll historical data on confidence in Congress can be found at https://news.gallup.com/poll/1597/confidence-institutions.aspx.
