The Web and computer technology have revolutionized the way students research and write papers. Unfortunately, these technologies have also made it vastly easier to plagiarize material: that is, to lift verbatim chunks of text from published work and pass it off as your own original creation. In response, many universities have promoted the use of commercial plagiarism-detection software, marketed under names such as Turnitin and MyDropBox. Still more unfortunately, in a systematic test of how effective these programs are in detecting blatant, wholesale plagiarism, the software bombed.
Why is plagiarism perceived as getting worse than it used to be? One factor is the physical ease of plagiarism nowadays. Back in the Dark Ages when I did my undergraduate work, it was not quite the quill-pen-by-kerosene-lamp era, but if I had ever decided to plagiarize something, it would have taken a good amount of effort: hauling books from the library, photocopying journal papers, dragging them to my room, and typing them into my paper letter by letter on a manual typewriter. With all that physical work and dead time involved, copying a few paragraphs with the intent of cheating wasn’t much easier than simply thinking up something on your own. The physical labor was the same either way.
Fast-forward to 2010: there’s Microsoft Word, there’s Google, and if you’re under 22 or so these things have been there for at least half your life. The “copy” and “paste” commands are vastly easier than hunting and pecking out your own words. And you suspect that a good bit of everything out on the Web was copied and pasted from somewhere else anyway. So what is the big deal some professors make about this plagiarism thing? The big deal is this: it’s wrong, because it constitutes theft of another person’s ideas, and fraud in that you give the false impression that you wrote it yourself.
In engineering, essays and library-research reports make up only a small part of what students turn in, so I do not face the mountains of papers that instructors in English or philosophy have to wade through every semester. But with plagiarism being so easy, I do not blame them for resorting to an alleged solution: plagiarism-detection software. Supposedly, this software compares the work under examination with web-accessible material, and if it finds a match, it flags the work with a color code ranging from yellow to red. Work that passes muster gets a green.
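The general idea described above can be illustrated with a toy sketch. To be clear, this is a hypothetical n-gram-overlap check I am using purely for illustration; it is not the actual algorithm behind Turnitin, MyDropBox, or any commercial product, and the thresholds and color codes are made up:

```python
# Toy illustration of color-coded plagiarism flagging by n-gram overlap.
# Hypothetical sketch only -- NOT how any commercial detector actually works.

def ngrams(text, n=5):
    """Return the set of n-word shingles in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def flag(submission, corpus, n=5):
    """Color-code a submission by its best n-gram overlap with any corpus document."""
    sub = ngrams(submission, n)
    if not sub:
        return "green"
    best = max((len(sub & ngrams(doc, n)) / len(sub) for doc in corpus),
               default=0.0)
    if best > 0.50:
        return "red"      # wholesale copying
    if best > 0.10:
        return "yellow"   # suspicious overlap
    return "green"        # passes muster

source = "the quick brown fox jumps over the lazy dog every single day"
copied = "the quick brown fox jumps over the lazy dog every single day"
original = "entirely different words that share nothing with the source text"

print(flag(copied, [source]))    # verbatim copy -> "red"
print(flag(original, [source]))  # no shared shingles -> "green"
```

The obvious catch, and the one the rest of this post turns on, is that `corpus` here has to contain the source you copied from: a detector can only match against material it can actually reach.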
In a recent paper in IEEE Technology and Society Magazine, Rebecca Fiedler and Cem Kaner report their tests of how well two popular brands of plagiarism-detection software actually work on papers that were copied word for word from academic journals. The journals themselves were not listed in the article, but they appear to be the usual type of research journal that requires payment (either from an individual or a library) for online access. Therein, I think, lies the key to why the software failed almost completely to disclose that the entire submission was copied wholesale, in twenty-four trials of different papers. If I interpret their data correctly, only one of the two brands tested was able to figure this out, and even then only in two of the twenty-four cases. Fiedler and Kaner conclude that professors who rely exclusively on such software for catching plagiarism are living with a false sense of security, at least where journal-paper plagiarism is concerned.
I think the results might have been considerably better for the software if the authors had chosen to submit material that is openly accessible on the Web, rather than publications sitting behind fee-for-service walls that require downloading particular papers. In my limited experience with doing my own plagiarism detection, I was able simply to Google a suspiciously well-written passage out of an otherwise almost incomprehensible essay and locate the university lab’s website where the writer had found the material he plagiarized. And I didn’t need the help of any detection software to do that.
As difficult as it may seem, the best safeguard against plagiarism (other than honesty on the part of students, which is always encouraged) is the experience of instructors who become familiar with the kind of material that students typically turn in, and even with passages from well-known sources which might be plagiarized. No general-purpose software could approach the sophistication of the individual instructor who deals with this particular class of students about a particular topic.
Of course, if we’re talking about a U.S. History class with 400 students, the personal touch is hard to achieve. Especially at the lower levels, books are more likely to be plagiarized from than research papers, and as Google puts pieces of more and more copyrighted books on the Web, plagiarism-detection software will probably take advantage of that to catch more students who try to steal material. It’s like any other form of countermeasure: the easy cheats are easily caught, but the hard-working cheats who go find stuff from harder-to-access places are harder to catch. But it’s not impossible, and one hopes that by the time students get to be seniors, they have adopted enough of their chosen discipline’s professionalism to leave their early cheating ways behind. Sounds like a country-western song. . . .
If any students happen to be reading this, please do not take it as an encouragement to plagiarize, even from obscure sources. The fact that your instructors’ cheating-detection software doesn’t work as well as it should is no reason to take advantage of the situation. Anybody reading a blog on engineering ethics isn’t likely to be thinking about how to plagiarize more effectively, anyway—unless they have to write a paper on engineering ethics. In that case, leave this blog alone!
Sources: The article “Plagiarism Detection Services: How Well Do They Actually Perform?” by Rebecca Fiedler and Cem Kaner appeared in the Winter 2010 issue (vol. 28, no. 4) of IEEE Technology and Society Magazine, pp. 37–43.