Monday, August 21, 2023

AP Faces AI

The Associated Press, one of the most respected news organizations in the world, with a history dating to 1846, has taken on the question of how its writers and editors should deal with generative AI, represented by chatbots such as ChatGPT. Since its release late last year, ChatGPT has been used by everyone from elementary-school children to Ph.D. scientists, and most observers would agree that its output reaches a new level of sophistication, human-like quality, and accuracy, at least most of the time.

Other forms of generative AI can produce audio clips, photos, and videos that look genuine but depict events that never happened. The AP has now taken a clear stance that such AI products should not be used unless they are clearly labeled as fabrications. In this the AP is following the example of Wired, which flatly forbids AI-produced content on its site unless the story is explicitly about AI and the material is presented as an example.

An Associated Press story by David Bauder quoted Amanda Barrett, AP's VP of news standards and inclusion, as saying, "Our goal is to give people a good way to understand how we can do a little experimentation but also be safe." Editors can use AI to come up with headlines or interview questions, but the final product presented to readers must be "vetted carefully." In other words, just because ChatGPT says something doesn't mean it's so.

Such rules leave a fair amount of wiggle room. For example, I can imagine a lazy reporter assigned to look into an obscure topic such as ball lightning, who might start by asking ChatGPT to write two or three paragraphs summarizing the subject. The reporter would be obliged to check any alleged facts that ChatGPT comes up with, but that is a lot easier than digging up examples and writing the summary from scratch.

If the question arises of how much of a story was written by AI and how much by the human reporter, it won't be easy to answer. When two people collaborate on a joint work that is repeatedly revised and re-edited, they sometimes lose track of whose sentence was whose, and the same will be true of collaborations with AI. Despite the AP's intention to keep AI in its place, namely as just another tool reporters can use, I suspect more and more content production is going to involve AI at some point.

But the intent is clear: when a reporter puts a byline on a story, the reporter is taking ultimate responsibility for every word. And that is as it should be. Nobody likes to get personalized letters from entities such as "The Google Team" or "Your friends at Acme Collection Agency." So the AP's insistence that reporters stand behind a story, however they put it together, rather than blaming AI for mistakes, is only reasonable.

Still, we may be farther down the road of AI involvement in news reporting than you think. In the same article describing AP's policies on AI, we learned that OpenAI, the developer of ChatGPT, has agreed to a deal with AP to "license" AP's archives going back to 1985. Apparently this means granting OpenAI complete access to those archives, which may not be publicly available otherwise. The arrangement has annoyed other content providers, who see AI developers' use of their own work without any such compensation as unfair.

The problem here is that most people, often including the developers of generative AI themselves, have no clear idea what the AI systems do with the trillions of bytes of data they scrape from various Internet sources. All we know is that without those sources, the AI system would be as useful as mammary glands on a boar hog.

Back in the olden days of plain plagiarism, matters were simpler: if Author A wrote a book and Author B came up with a different book in which several verbatim pages of text from Author A's book appeared without attribution, you had an open-and-shut case.

But if we allow an AI system to look at work created by humans, and then ask it to come up with something similar, and the work it produces contains no significant word-for-word copying of the original works, how is that any different from Thomas Mann reading the Bible and then writing Joseph in Egypt? Mann was a human being, of course, but it is the process itself that is in question here. Clearly these are issues that our legal and ethical systems are going to have to come to grips with, and we are only beginning to do so.

The NewsGuild, a 26,000-member union of journalists that includes many AP staff members, has expressed concern that AI may replace human reporters in ways that will harm the whole industry, not to mention the reporters whose jobs will disappear. Nobody has been talking about walkouts, and legacy media such as newspapers are in such parlous financial straits that organizations like the NewsGuild realize you can't get blood out of a turnip. But they are right to worry that outlets with less of a reputation to uphold than the AP will grab onto ChatGPT and use it instead of live human beings to produce material that is just good enough to attract clicks.

The AP itself is a non-profit cooperative supported by member news organizations, and so it can take a somewhat above-the-fray view compared to an outfit that has to make a profit. The AP is to be praised for addressing the public's concerns about AI-generated news, and I hope it can make its rules stick about how AI should and should not be used. It should be easy enough to avoid the blatant howlers ChatGPT seems prone to come up with, such as an entire legal brief full of citations that turn out to be imaginary. What concerns me more are the subtler effects of relying on ChatGPT for drafts, for instance. Every piece of its writing I have seen has had exemplary grammar, which is more than I can say for some news items written by humans. When you take a hundred encyclopedias' worth of verbiage and put it in a digital blender, the words that come out may be arranged in correct grammar. But they will lack that ineffable something that goes under the name of style. And we may miss it more than we think.

Sources: David Bauder's article on AP's rules regarding AI can be found at https://apnews.com/article/artificial-intelligence-guidelines-ap-news-532b417395df6a9e2aed57fd63ad416a. I also referred to related articles at https://www.niemanlab.org/2023/07/writing-guidelines-for-the-role-of-ai-in-your-newsroom-here-are-some-er-guidelines-for-that/ and https://apnews.com/article/openai-chatgpt-associated-press-ap-f86f84c5bcc2f3b98074b38521f5f75a, and to the Wikipedia article on the NewsGuild. As always, you can rest assured that no AI tool was used in the writing of this article.
