Monday, May 27, 2024

The Seoul AI Summit: Serious Safety Progress or Window-Dressing?

 

Last Tuesday, representatives from the world's heavy hitters in "artificial intelligence" (AI) made a public pledge at a mini-summit in Seoul, South Korea, to make sure AI develops safely.  Google, Microsoft, Amazon, IBM, Meta (of which Facebook and other social-media platforms are a part), and OpenAI all agreed on voluntary safety precautions, even to the extent of cutting off systems that present extreme risks.

 

This isn't the first time we've seen such apparent unanimity on the part of companies that otherwise act like rivals.  This meeting was actually a follow-up to a larger one held last November at Bletchley Park, England, at which a "Bletchley Declaration" was signed.  I haven't read it, but reportedly it contains promises about sharing responsibility for AI risks and holding further meetings, such as the one in South Korea last week.

 

Given the cost and the time spent by upper executives, we should ask why such events are held in the first place.  One reason could be that they are opportunities to generate positive news coverage.  Your company and a lot of others pay lots of money to send prominent people to one particular place, where the media would be downright negligent not to cover the gathering.  And whatever differences the firms have outside the meeting, they manage to put up a united front when they sign and announce declarations full of aspirational language like "pledge to cooperate" and "future summits to define further" and so on.

 

One also has to ask whether such meetings make any difference to the rest of us.  Will AI really be any safer as a result of the Bletchley Declaration or the Seoul summit?  The honest answer is that, in some ways, it's too early to tell.

 

Every now and then, a summit that looks like it's mainly for window-dressing publicity and spreading good vibes turns out to mark a genuine advance in the cause it champions.  In 1975, a group of molecular biologists and other biotechnology professionals gathered at Asilomar, California, to discuss the ethical status and future of a technology called recombinant DNA.  Out of safety concerns, scientists worldwide had halted such research, and the urgent task of the meeting was to hammer out principles under which it could safely go forward.

 

The scientists did reach an agreement about which types of research were allowed and which were prohibited.  Among the prohibited types were experiments to clone DNA from highly pathogenic organisms.  It's not clear to me whether this would have stopped the kind of research that went on in the Wuhan laboratories suspected of originating COVID-19.  But it would have been nice if it had.

 

Historians of science look back on the Asilomar conference as a new step in bringing safety concerns about science before the public, and in reaching public agreements about rules to follow.  So such summits can do some good.

 

However, there are differences between the 1975 Asilomar meeting and the kinds of meetings the AI firms held at Bletchley Park and in Seoul.  For one thing, at Asilomar the participants were the same people who were doing the work under discussion, and there weren't that many of them—only about 140 scientists attended.  I seriously doubt that the people at the UK and Korea AI safety meetings were exclusively working AI engineers and scientists, although I could be wrong.  Such technical types rarely have the clout to sign any kind of document committing the entire firm to anything more than buying pencils, let alone making a high-sounding safety pledge.  No, you can rest assured that these were upper-management types, which is probably one reason the texture of the agreements resembled cotton candy—it looks pretty, it even tastes good, but it's mostly air, and there's nothing really substantial to it.

 

My standard response to anyone who asks me whether AI will result in widespread harm is, "It already has."  And then I give my standard example.

 

If you look at how American democracy operated in, say, 1964, and compare it to how it works today, you will note some differences.  Back then, most people got more or less the same news content, which was delivered in carefully crafted forms such as news releases and news conferences.  The news then could be compared to a mass-produced automobile, which went through dozens of hands, inspections, and safety checks before being delivered to the consumer.

 

Today, on the other hand, news comes in little snippets written by, well, anybody who wants to write them.  Huge, complicated diplomatic issues are dealt with by the digital equivalent of throwing little handwritten notes out the window.  And everybody gets a different customized version of reality, designed not to inform but to inflame and inspire clicks, with factual accuracy ranking somewhere between number 20 and number 30 on the list of priorities internalized by the same firms we saw gathering last week in Seoul.

 

The results of deeply embedding what amounts to AI (with a small fraction of the work done by humans) in social media are all around us:  a dysfunctional government that has lost most of whatever respect from the public it ever had; an electoral process that has delivered two of the least-liked presidential candidates in over a century; and a younger generation that is the most unhappy, fragile, and pessimistic one in decades.

 

While it is true that AI is not exclusively responsible for these ills, it is inextricably implicated in them.

 

For the heck of it, I will wind up this piece with a Latin quotation:  Si monumentum requiris circumspice.  It means "If you seek his monument, look around you."  It is inscribed near the tomb of Sir Christopher Wren, the architect of St. Paul's Cathedral in London, where he is buried.  We can apply the same phrase to the workings of AI in social media.  Instead of holding meetings that issue noble-sounding broad declarations of pledges to develop AI safely, I would be a lot more convinced of the firms' sincerity if they put together a lot of working engineers and scientists and told them to fix what has already been broken.  But that would mean they would first have to admit they broke it, and they don't want to do that.

 

Sources:  An Associated Press article on the Seoul AI Safety mini-summit appeared at https://apnews.com/article/south-korea-seoul-ai-summit-uk-2cc2b297872d860edc60545d5a5cf598.  I also referred to Wikipedia articles on "Asilomar Conference on Recombinant DNA" and Christopher Wren.

Monday, May 20, 2024

What "IF" Says About AI and Love

 

The John Krasinski movie "IF" came out this past weekend, and my wife and I went to see it.  I won't have to put in a spoiler alert if all I say here is that it's about imaginary friends that children come up with and then abandon, only to meet their "IFs" again later in life.  What has this got to do with engineering ethics?  Several things, actually.

 

For one thing, one of our culture's most popular art forms—the cinema—is deeply embedded in state-of-the-art technology that allows entirely imaginary beings to appear onscreen with actual people, looking as realistic as the hairs on your head.  Yes, animated cinema has a century-long history, but the seamless integration of live action and dreamed-up entities such as Blue, the nine-foot-tall purple fuzzball that appears in ads for "IF," has been possible for only the last few decades, and it relies on a small army of animators and other technical people plus the best CGI technology money can buy.

 

For another thing, "IF" focuses on the roles played by, let's face it, figments of our youthful imaginations.  As my wife and I were talking after the film, she stated that she was sure she had an imaginary playmate growing up, while I could not recall any such thing, although I enjoyed many imaginary adventures with real friends before the age of about 12.  Whether or not you had an IF yourself, you can understand that many children do. 

 

The movie leaves unexplored the question of why kids make up imaginary friends, and instead treats the IFs as entirely independent souls, despondent that their former playmates left them behind.  I use the word "soul" intentionally, because the beings in question have intelligence and will.  Being so endowed, they are capable of love, which the movie clearly signals as the ultimate outcome when an abandoned IF is reunited with his or her child, no matter what the child's present age is.

 

As touching as many of the scenes reuniting an IF with its soulmate were, I personally found the most moving part of the film to be a scene that relied on a person, a technology, and a work of art which all originated in the mid-20th century.  The person was the grandmother of the main character, the twelve-year-old Bea.  Grandma is portrayed as well-intentioned, but remote and clueless about how time has changed her granddaughter, whom she apparently hasn't seen in several years.  The technology was a floor-model stereo record player, the type which gave rise to the immortal couplet "Enjoy your stereo often, then use it for a coffin."  And the work of art playing on the phonograph was Aram Khachaturian's "Spartacus" ballet, to which the grandmother had danced at a public performance when she was about Bea's age.  To get her grandmother in touch with her inner child, Bea plays the record and watches as her otherwise bumbling and ineffectual ancestor transforms herself into a graceful ballerina there in her darkened New York apartment, illuminated only by city lights that profile her like stage spotlights during her dance.

 

Yes, the grandmother's imaginary friend experienced an E. T.-like revival once the grandmother remembered her earlier fleeting experience as a dancer.  But the true act of love in the scene was Bea's thoughtfulness in acting on the evidence of an old photograph, choosing the record, and playing it in Grandma's presence. 

 

And this is the quibble I have with the movie.  The characters' actions, the facial expressions, and even the musical score all telegraph that the reunion of adults with their abandoned IFs is the best thing that's ever happened to these people.  It's certainly the best thing that's happened to the IFs, whose plight is the engine that drives the plot forward.  But can anything that we make up ourselves, anything that we have complete control over, really be a source of meaningful love? 

 

This is not a trivial question, as we watch advanced AI chatbots such as ChatGPT and its successors and imitators proliferate at an unsettling speed.  Already, some of my recent Google inquiries have led with an AI-generated paragraph that I read without realizing it was from an AI system.  Only after I sensed something off or skewed about it did I notice that it was from Google's answer to ChatGPT. 

 

No, I am not a Luddite who wishes all AI to be plunged to the bottom of the sea.  But as large-language-model AI systems begin to imitate the sound of real humans more and more, we will be tempted to treat them that way, expecting more from them than they can deliver. 

 

For most children, an imaginary playmate is a harmless aid to play, which, in its proper role, is how we teach ourselves to become adults.  Krasinski cleverly shows the grandmother's TV playing scenes from "Harvey," the 1950 Jimmy Stewart film about a man with the wonderful name of Elwood P. Dowd, who imagines he has befriended a six-foot rabbit.  We should remember that Dowd ends up in a mental institution, though with an ultimately happy outcome.

 

Writers and other storymaking types often say that once they have created a character, the character sometimes takes on a life of its own and does things that the writer never thought it would do.  Despite having fruitlessly attempted the writing of fiction, I can't say this has ever happened to me, and maybe that's why I never had an imaginary friend when I was a child.  But even writers know that their characters are simply figments, not realities capable of loving or hating real people.

 

The existential philosopher Martin Buber is famous for distinguishing two types of relationships.  One is the I-it relationship that souls have with the natural environment and human-created things.  The other type is the I-thou relationship, which can only happen between souls.  Regardless of the emotional weight put on them, imaginary friends and AI chatbots do not have souls, and we can only relate to them on an I-it basis. 

 

Both children, who are growing up these days in an environment very hostile to young people, and adults can give and receive love only in I-thou relationships between persons, or between a person and God.  While movies like "IF" say something worth listening to about our inner child, we err in hoping for that which an imaginary friend cannot give.

 

Monday, May 13, 2024

Why Did Chicago Shoot Down ShotSpotter?

ShotSpotter is an acoustic gunshot-detection system marketed by the public-safety technology firm SoundThinking and used by well over 100 cities in the U. S.  In some ways, it sounds like a law-enforcement dream come true.  Before ShotSpotter, a citizen who heard gunshots could report them, but usually had no idea where the sound came from.  In an area covered by ShotSpotter, police can now often pinpoint the source of a gunshot with an accuracy in the range of 2 to 8 meters (about 6 to 26 feet).  What's not to like about ShotSpotter?

 

A lot, it turns out, at least if you're the mayor of Chicago.  Back in February, the office of Mayor Brandon Johnson, who won his first election while campaigning on a promise to end the use of ShotSpotter, announced that the city would not renew its ShotSpotter contract.  Understandably, with millions of dollars already spent on the system, Chicago Police Superintendent Larry Snelling defends ShotSpotter.  He was quoted in an Associated Press report as saying, "If we're not utilizing technology, then we're falling behind in crime fighting."

 

Mayor Johnson and other critics have three main charges against the way ShotSpotter is used.  First, they say it's "inaccurate."  That is of course a relative term.  In a technical paper published on the ShotSpotter website, the location accuracy was tabulated for a simulated test in a typical environment, with the results cited above (typically 2 to 8 meters).  With regard to false positives (saying there was a gunshot when there was actually some more benign sound such as a car backfiring) and false negatives (missing a true gunshot), the company claims that typically 96% of gunshots are detected correctly.  So although no system is perfect, ShotSpotter engineers seem to have achieved a remarkable success rate in a highly challenging acoustic environment by using advanced signal-processing techniques to enhance the accuracy of what is basically a time-of-flight location system.
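
To see how a bang can be located from timing alone, here is a minimal sketch of the underlying idea (my own Python illustration, not SoundThinking's proprietary algorithm; the sensor layout, timings, and noise level are invented for the example).  Each sensor timestamps the same gunshot, and a grid search picks the point whose predicted arrival-time differences between sensors best match the measured ones; taking differences cancels out the unknown moment of firing.

```python
import numpy as np

C = 343.0  # nominal speed of sound in air, m/s (varies with temperature)

# Hypothetical sensor positions in meters -- invented for illustration.
sensors = np.array([[0.0, 0.0], [300.0, 0.0], [0.0, 300.0], [300.0, 300.0]])
true_source = np.array([120.0, 80.0])

# Each sensor's arrival time is distance/C, plus a little timing jitter.
rng = np.random.default_rng(1)
t = np.linalg.norm(sensors - true_source, axis=1) / C
t += rng.normal(0.0, 1e-4, size=len(sensors))  # ~0.1 ms of timing noise

# Score every point of a 1-meter grid at once.
xs, ys = np.meshgrid(np.arange(0.0, 301.0), np.arange(0.0, 301.0))
pts = np.stack([xs.ravel(), ys.ravel()], axis=1)                      # (N, 2)
dist = np.linalg.norm(pts[:, None, :] - sensors[None, :, :], axis=2)  # (N, 4)
pred = dist / C

# Compare arrival-time DIFFERENCES relative to sensor 0, so the
# unknown emission time cancels out of the equations.
resid = (pred - pred[:, :1]) - (t - t[0])
score = (resid ** 2).sum(axis=1)
estimate = pts[np.argmin(score)]

print("true source:", true_source, " estimate:", estimate)
```

With four sensors and a tenth of a millisecond of timing noise, the grid point nearest the true source wins easily.  The hard part in a real city is everything this toy omits:  echoes off buildings, occlusion, overlapping impulsive sounds, and deciding whether the bang was a gunshot in the first place.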

 

Another accusation leveled against the system as typically deployed in an urban setting is that it is racially biased.  A survey by Wired Magazine based on a leaked document giving the secret physical locations of about 25,000 ShotSpotter sensors backs up this accusation.  Wired found that about three-fourths of the neighborhoods where at least one ShotSpotter sensor was deployed were non-white, with an average income of about $50,000 a year.

 

When confronted with these results, SoundThinking senior vice president of forensic services Tom Chittum said that given a limited number of sensors, the company chooses to deploy them in areas that are "likely disproportionately impacted by gun violence."  In other words, they place sensors where they are most likely to pick up gunshot sounds.  For reasons that have nothing to do with ShotSpotter but are deeply rooted in historical and cultural factors, these neighborhoods tend to be poorer ones where minority groups live.

 

The third accusation is harder to refute:  that law-enforcement personnel "misuse" the data provided by ShotSpotter.  Critics cite cases in which police officers are deployed to a ShotSpotter-indicated location and find bystanders whom they then arrest and charge with violations unrelated to gun use.  Sometimes this leads to cases like that of a Chicago grandfather who was arrested after a ShotSpotter location led officers to him, and who was later released when a judge found insufficient evidence to convict him.

 

Undoubtedly, ShotSpotter has also assisted in the capture and conviction of real criminals.  Otherwise it seems hard to believe that police forces all over the country would spend hundreds of millions of dollars on it, unless they are all playing keep-up-with-the-Jonesville-police-department and making sure they have the latest technology just because it's there. 

 

Mayor Johnson's dumping of Chicago's ShotSpotter system did not happen in a vacuum:  there has been no love lost between the mayor and the police force in general.  In April, Chicago's Fraternal Order of Police, the officers' union, endorsed a drive to recall Mayor Johnson.  And in numerous ways, Mayor Johnson has made it no secret that he is highly critical of how the police do their job.

 

Not everyone in a neighborhood plagued by drug abuse, crime, and violence dislikes the police.  Lots of ordinary citizens would like to see more of the police than they do, and they are probably in favor of anything that helps the police do their job of fighting crime, including ShotSpotter.  After all, most of the sensors are on private property, and the owner's permission has to be granted before a sensor can be installed.  If there were a groundswell of opposition to ShotSpotter, you would think the company would have problems installing its sensors.

 

One of the most basic functions of a city's government is law enforcement.  But over the past few years, especially during the George Floyd riots of 2020, we have been treated to the spectacle of supposedly responsible officials proposing to defund entire police departments.  In the cities where movements in that direction actually gained headway, the results were in keeping with what common sense would predict:  soaring rates of crime and an exodus of both residents and businesses. 

 

District attorneys who make blanket announcements that certain types of crime will no longer be prosecuted find that those exact kinds of crimes proliferate.  How this policy is supposed to benefit society (unless you consider "society" to be restricted to the class of petty criminals) is not clear.

 

As a technology, ShotSpotter works about as well as the state of the art permits, given the challenging environment it operates in.  But technology is always about more than technology.  The social environment and the way ShotSpotter results are used have led to a perception in some circles that it is just another way to beat down black people and other persecuted minorities.  Many police personnel are of the same race and culture as the people they are sworn to defend.  It is profoundly demoralizing to be told that a dangerous, tedious job which you do to the best of your ability is not only unappreciated by the highest official of the city, but actively criticized.  With regard to law enforcement and the use of ShotSpotter, Chicago is clearly a house divided.  And we know what happens, sooner or later, to a house divided:  it cannot stand.

 

Sources:  The Wired report on ShotSpotter sensor locations was published on Feb. 22, 2024 at https://www.wired.com/story/shotspotter-secret-sensor-locations-leak/.  I also referred to the Associated Press article "Chicago to stop using controversial gunshot detection technology this year" at https://apnews.com/article/shotspotter-chicago-gunshot-technology-mayor-f9a1b24d97a1f1efb80296dbe9aff1ed.  I referred to ShotSpotter Inc.'s technical note TN-098, "Precision and accuracy of acoustic gunshot location in an urban environment," dated Jan. 2020.

 


Monday, May 06, 2024

Synthesia: A Skateboard or a Crutch?

 

Last week, my wife and I joined some friends for supper at an unpretentious cafe in a nearby town.  It's the kind of place where the waitresses learn the customers' names and most people don't need to look at the menu before they order.  We've eaten there numerous times, and the food was good, as usual.

 

But since the last time we visited the place, something new had been added:  a small white electronic piano taking up a few feet of lunch-counter space, just inside the main door.  Seated at the piano was a teenage boy, and he was playing pop tunes.  I regret to say I can't remember what any of them were, but as we sat down close by, we had no trouble at all hearing what he was playing.

 

There was something off about his style, but I couldn't put my finger on it.  The bass line sounded rather mechanical, and while there was a melody, there wasn't much harmony with it, if any.  Still, you could recognize the tune, and while I didn't think this particular live music was much of an addition to the atmosphere, it wasn't cringingly bad, either.

 

Halfway through the meal, my wife, who was seated with a better view of the pianist, told me he wasn't using sheet music.  There was something on his smartphone that he was using instead.  I turned around to look.

 

On the kid's screen was a pattern of vertical dashes moving upward, and below the dashes was a small video of two hands moving around on a keyboard, playing the tune he was playing.  Older readers may be familiar with the concept of a player piano.  Before the days of high-quality sound recording, mechanical self-playing pianos were developed that used wide stiff-paper rolls with holes punched in them.  A device called a tracker bar read the hole patterns pneumatically as the roll unwound past it, and the piano played in accordance with the pattern.

 

Here was a piano roll, in digital form.  But instead of a player piano, there was a human piano player playing the part of a player piano tracker bar.
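
In software terms, a roll is just a time-ordered list of note events.  As a minimal sketch (assuming the open-source mido library and a placeholder file name), here is how a MIDI file can be flattened into such a "digital piano roll," which is essentially what a Synthesia-style display draws as falling dashes:

```python
import mido  # pip install mido -- a widely used library for Standard MIDI Files

roll = []
elapsed = 0.0
# "song.mid" is a placeholder; any Standard MIDI File will do.
# Iterating over a MidiFile yields messages with delta times in seconds.
for msg in mido.MidiFile("song.mid"):
    elapsed += msg.time
    if msg.type == "note_on" and msg.velocity > 0:
        roll.append((elapsed, msg.note))  # (when to play, which key)

# The falling-dash display is just this list, drawn in time order.
for when, note in roll[:10]:
    print(f"{when:7.3f} s  ->  MIDI note {note}")
```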

 

After we finished the meal, I went up to him, dropped some money in his tip jar (which was already well stocked), and asked him what was on his smartphone.

 

"It's a YouTube video.  See, all I have to do is follow the keys.  I've taught myself.  It's easy."

Keeping up with the moving dots seemed like an impossible task to me, but then again, I was a little older than he was (he told me he was 16). 

 

A little research shows that the program which produces the piano-roll patterns is a video game called Synthesia.  I found a long Reddit thread in which various people chimed in on whether using Synthesia to play piano music was good, harmful, a waste of time, or what.

 

As is usually the case with these kinds of Internet debates, opinions are divided.  Most of the commenters came down on the negative side of using Synthesia exclusively.  One said, " . . . you probably won't find anyone using them if they're really good at piano, because learning from synthesia gets significantly harder the harder the songs get." 

 

On the other hand, one enthusiast said, "I'm very pro [synthesia].  I have memorized Maple Leaf Rag, Fur Elise, and currently working on a very advanced piece.  I'm an adult beginner and piano is not a huge priority in my life.  I don't really want to learn to play the 'right' way.  I just want to go straight to playing cool pieces."

 

The fact that conventional musical notation is essentially a foreign language one has to learn came up a lot.  One person who can read music and also uses Synthesia-based videos said that he picks up non-classical music with Synthesia, although he usually knows the piece already from having learned it from sheet music.

 

While I was secretly hoping to find some manifesto by the Piano Teachers' Guild coming out with guns blazing to condemn Synthesia, there is apparently no such document.  Instead, several online commenters say it can be good for beginners, but definitely no substitute for learning to read sheet music the old-fashioned way. 

 

It wouldn't surprise me at all to learn that back when player pianos became available around 1900, some people taught themselves to play by watching the holes pass over the tracker bar.  But the only role player pianos played in the life histories of famous musicians I'm aware of was as a way of preserving their playing at a higher quality than old sound-recording technology could achieve.  For example, George Gershwin produced a set of reproducing-piano rolls, which preserve the dynamics of playing as well as its timing and sequence.  He was even known to go over the rolls after he had recorded them to touch up mistakes and otherwise enhance his own performances.

 

So are the piano teachers of the world doomed by the advent of Synthesia and its spawn of YouTube videos?  It doesn't look that way. 

 

For people who either can't afford piano lessons or aren't that committed to learning, but would like to do something musical with an inexpensive keyboard and a smartphone, Synthesia seems to be a good way to fool around with the basics.  In that sense, it's kind of like using a skateboard instead of walking.  It's fun, it will get you some places faster, but nobody skateboards from Nashville to Chicago.  And for people who simply can't imagine trying to learn to read musical notation but want to make music with a keyboard, it's a crutch that might enable you to do what you otherwise couldn't do at all.

 

Sources:  The Reddit thread from which I excerpted comments can be found at https://www.reddit.com/r/piano/comments/11j3u6o/i_dont_understand_how_people_learn_a_song_using/.  I also referred to the Wikipedia article on Synthesia.