Monday, May 27, 2024

The Seoul AI Summit: Serious Safety Progress or Window-Dressing?

 

Last Tuesday, representatives from the world's heavy hitters in "artificial intelligence" (AI) made a public pledge at a mini-summit in Seoul, South Korea, to make sure AI develops safely.  Google, Microsoft, Amazon, IBM, Meta (the parent of Facebook and other social-media platforms), and OpenAI all agreed to voluntary safety precautions, even to the extent of shutting down systems that present extreme risks.

 

This isn't the first time we've seen such apparent unanimity on the part of companies that otherwise act like rivals.  This meeting was actually a follow-up to a larger one held last November at Bletchley Park, England, at which a "Bletchley Declaration" was signed.  I haven't read it, but reportedly it contains promises about sharing responsibility for AI risks and holding further meetings, such as the one in South Korea last week.

 

Given the cost and time spent by upper executives, we should ask why such events are held in the first place.  One reason could be that they are opportunities to generate positive news coverage.  Your company and a lot of others pay good money to send prominent people to a particular place, where the media would be downright negligent if they didn't cover the event.  And whatever differences the firms have outside the meeting, they manage to put up a united front when they sign and announce declarations full of aspirational language like "pledge to cooperate" and "future summits to define further" and so on.

 

One also has to ask whether such meetings make any difference to the rest of us.  Will AI really be any safer as a result of the Bletchley Declaration or the Seoul summit?  The obvious answer is that, in some ways, it's too early to tell.

 

Every now and then, a summit that looks like it's mainly for window-dressing publicity and spreading good vibes turns out to mark a genuine advance in the cause it is concerned with.  In 1975, a group of molecular biologists and other biotechnology professionals gathered in Asilomar, California, to discuss the ethical status and future of a technology called recombinant DNA.  Out of safety concerns, scientists worldwide had halted such research, and the urgent task of the meeting was to hammer out principles under which it could safely go forward.

 

The scientists did reach an agreement about what types of research were allowed and what types were prohibited.  Among the prohibited types were experiments to clone DNA from highly pathogenic organisms.  It's not clear to me whether this would have stopped the kind of research that went on in the Wuhan laboratories suspected by some of giving rise to COVID-19.  But it would have been nice if it had.

 

Historians of science look back on the Asilomar conference as a new step in bringing safety concerns about science before the public, and in reaching public agreements about rules to follow.  So such summits can do some good.

 

However, there are differences between the 1975 Asilomar meeting and the kinds of meetings held by AI firms at Bletchley Park and Seoul.  For one thing, at Asilomar the participants were the same people who were doing the work they were talking about, and there weren't that many of them—only about 140 scientists attended.  I seriously doubt that the people at the UK and Korea AI safety meetings were exclusively working AI engineers and scientists, although I could be wrong.  Such technical types rarely have the clout to sign any kind of document committing the entire firm to anything more than buying pencils, let alone making a high-sounding safety pledge.  No, you can rest assured that these were upper-management types, which is probably one reason the texture of the agreements resembles cotton candy—it looks pretty, it even tastes good, but it's mostly air and there's nothing really substantial to it.

 

My standard response to anyone who asks me whether AI will result in widespread harm is, "It already has."  And then I give my standard example.

 

If you look at how American democracy operated in, say, 1964, and compare it to how it works today, you will note some differences.  Back then, most people got more or less the same news content, which was delivered in carefully crafted forms such as news releases and news conferences.  The news then could be compared to a mass-produced automobile, which went through dozens of hands, inspections, and safety checks before being delivered to the consumer.

 

Today, on the other hand, news comes in little snippets written by, well, anybody who wants to write them.  Huge, complicated diplomatic issues are dealt with by the digital equivalent of throwing little handwritten notes out the window.  And everybody gets a different customized version of reality, designed not to inform but to inflame and inspire clicks, with factual accuracy landing somewhere between number 20 and number 30 on the list of priorities internalized by the same firms we saw gathering last week in Seoul.

 

The results of the deep embedding of what amounts to AI (with a small fraction of the work being done by humans) in social media are all around us:  a dysfunctional government that has lost most of whatever public respect it ever had; an electoral process that has delivered two of the least-liked presidential candidates in over a century; and a younger generation that is the most unhappy, fragile, and pessimistic one in decades.

 

While it is true that AI is not exclusively responsible for these ills, it is inextricably implicated in them.

 

For the heck of it, I will wind up this piece with a Latin quotation:  Si monumentum requiris circumspice.  It means "If you seek his monument, look around you."  It is inscribed on the memorial to Sir Christopher Wren, the architect of St. Paul's Cathedral in London, where he is buried.  We can apply the same phrase to the workings of AI in social media.  Instead of holding meetings that issue noble-sounding declarations full of pledges to develop AI safely, I would be a lot more convinced of the firms' sincerity if they put a lot of working engineers and scientists together and told them to fix what has already been broken.  But that would mean they would first have to admit they broke it, and they don't want to do that.

 

Sources:  An Associated Press article on the Seoul AI Safety mini-summit appeared at https://apnews.com/article/south-korea-seoul-ai-summit-uk-2cc2b297872d860edc60545d5a5cf598.  I also referred to Wikipedia articles on "Asilomar Conference on Recombinant DNA" and Christopher Wren.
