That's the question that Bard College professor of literature Hua Hsu asks in the current issue of The New Yorker. Anyone who went to college remembers having to write essay papers on humanities subjects such as art history, literature, or philosophy. Even before computers, the value of these essays was questionable. Ideally, the point of writing an essay to be graded by an expert in the field was to give students practice in analyzing a body of knowledge, taking a point of view, and expressing it with clarity and even style. The fact that few students achieved these ideals was beside the point, because, as Hsu says in his essay, "I have always had a vague sense that my students are learning something, even when it is hard to quantify."
The whole process of assigning and grading essay papers has recently been short-circuited by the widespread availability of large-language-model artificial intelligence (AI) systems such as ChatGPT. Curious to see whether students at large schools used AI any differently from those at his exclusive small liberal-arts college, Hsu spent some time with a couple of undergraduates at New York University, which has a total graduate-plus-undergraduate enrollment of over 60,000. They said things such as "Any type of writing in life, I use A.I." At the end of the semester, one of them spent less than an hour using AI to write two final papers for humanities classes, and estimated that doing it the hard way might have taken eight hours or more. The grades he received on the papers were an A-minus and a B-plus.
If these students are representative of most undergraduates, who are under time pressure to get the most done with the least damage to their GPAs while concentrating on the subjects they are actually interested in, one can understand why they turn to resources such as ChatGPT to deal with courses that require a lot of writing. Professors have taken various tacks to deal with the issue, which has mostly flummoxed university administrations. Following an initial panic after ChatGPT was made publicly available in 2022, many universities have changed course and now run faculty-education courses that teach professors how to use ChatGPT more effectively in their research and teaching. Barry Lam, a philosophy professor at the University of California, Riverside, deals with it by telling his class on the first day, "If you're gonna just turn in a paper that's ChatGPT-generated, then I will grade all your work by ChatGPT and we can all go to the beach." Presumably his class isn't spending all its time at the beach yet, but Lam is pretty sure that many of his students use AI in writing their papers anyway.
What are the ethical challenges in this situation? It's not plagiarism pure and simple. As one professor pointed out, there are no original texts being plagiarized, or if there are, the plagiarizing is being done by the AI system that scraped the whole Internet for the information it comes up with. The closest non-technological analogy is the "paper mills" that students can pay to write custom papers for them. This is universally regarded as cheating, because students are passing off another person's work (that of the paper mill's employee) as their own.
When Hsu asked his interviewees about the ethics of using AI heavily in writing graded essays, they characterized it as a victimless crime and an expedient that gives them more time for other important tasks. If I had been there, I might have pointed out something I tell my own students at the beginning of the semester, when I warn them not to cheat on homework.
A STEM (science, technology, engineering, math) class is different from a humanities class, but just as vulnerable to the inroads of AI, as a huge amount of routine coding is now reportedly done with clever prompts to AI tools rather than by writing the code directly. What I tell my students is that if they evade doing the homework, either by having AI do it all or by paying a homework service, they are cheating themselves. The point of doing the homework isn't to get a good grade; it is to give your mind practice in solving problems that (a) you will face without the help of anything or anybody when I give a paper exam in class, and (b) you may face in real life. Yes, in real life you will be able to use AI assistance. But how do you know it's not hallucinating? Hsu cites experts who say that the hallucination problem (AI saying things that aren't true) has not gone away, and may actually be getting worse, for reasons that are poorly understood. At some level, important work, whether it's a humanities research paper to be published or a bridge to be built, must pass through the evaluative mind of a human being before it reaches the public.
That raises the question of what work qualifies as important. It's obvious from reading the newspapers I read (a local paper printed on actual dead trees, and an electronic version of the Austin American-Statesman) that "important" doesn't cover much of what passes for news in them. Just this morning, I read an article about parking fees, and both the headline and the "see page X" note at the end referred to an article on women's health, not the parking-fees article. And whenever I read the Austin paper on my tablet, whatever system they use to convert the actual typeset words into more readable type when you select an article oftenmakesthewordscomeoutlikethis. I have written the managing editor about this egregious problem, but to no avail.
The most serious problem I see in the way AI has taken over college essay writing is not the ethics of the situation per se, although that is bad enough. It is the general lowering of standards on the part of both originators and consumers of information. In his 1869 book Culture and Anarchy (which I have not read, by the way), British essayist Matthew Arnold argued that a vital aspect of education was to read "the best which has been thought and said" as a way of combating the degradation of culture that industrialization was bringing on. But if, after a certain point, nobody except machines thinks or says those gems of culture, we all might as well go to the beach. I actually went to the beach a few weeks ago, and it was fun, but I wouldn't want to live there.
Sources: Hua Hsu's "The End of the Essay" appeared on pp. 21-27 of the July 7 & 14, 2025 issue of The New Yorker. I referred to the website https://newlearningonline.com/new-learning/chapter-7/committed-knowledge-the-modern-past/matthew-arnold-on-learning-the-best-which-has-been-thought-and-said for the Arnold quote.