Current Events

ChatGPT: Friend or Foe?

ChatGPT has made waves over the last few weeks because it's the most high-performing AI-powered chatbot yet created. Able to produce natural, human-sounding prose, it has raised alarms in the academic world because students can submit AI-generated papers so convincing that they can potentially elude plagiarism detectors. A type of AI known as a large language model, ChatGPT is trained on a huge data set of human-written text, and its training involves actual humans providing reinforcement feedback, with the goal of developing a bot capable of challenging incorrect assumptions and admitting mistakes. If asked, it will tell you it can understand context and intent.

[Image: person reaching out to a robot. Tara Winstead via Pexels.com]

Some of the potential concerns people have raised include ethical problems in AI-generated "advice," the fact that ChatGPT can produce realistic-sounding but incorrect information, and the ever-present reality that machines are created and programmed by particular humans with particular values and ambitions. All of these concerns seem important. Here, though, I want to focus on just one: the possibility of cheating or plagiarizing in academia, where I've taught writing and literature.

Traditional plagiarism means copying the work of another and passing it off as your own. Turning in an AI-generated paper seems like a new variant of academic dishonesty. If it’s really possible for a chatbot to write a paper that can be passed off as the student’s work, it will create serious legitimacy problems for student grades — assuming it’s the weaker students who will delegate the difficult work of writing to an AI.

But is it really possible? The few times I had students who genuinely plagiarized, it wasn't the plagiarism detection software that discovered it; I did. There were enough differences between a student's typical writing voice and the language and style of the submission that the borrowed material could be identified. The plagiarism software was useful mainly as a teaching tool. In most cases, if the tool highlighted passages of an essay because they resembled material in its database of papers and articles, I used it as an opportunity to sit down with the student and discuss how to integrate and document source material more carefully. I could tell this was the issue because there were indications in the paper that the student was using sources.

The true plagiarists were different. They gave no indication that the ideas and expression were not their own. One student actually copied verbatim from Wikipedia, and when confronted with the passages that had been inserted into her essay, she swore they were her own. Another student purchased a paper online and turned it in; a Google search of its first few sentences exposed it immediately. In these cases the prose didn't match the students' typical style of expression.

[Image: Andrew Neel via Unsplash]

What I conclude from this is that stolen writing is recognizable if we have some knowledge of our students. In a huge lecture course where a teacher's only personal interaction with students is a term paper, ChatGPT would be a threat. (I would argue that such a course format is a threat to learning in general: too large, too passive, too impersonal. That's why many such courses include a smaller discussion component, where students meet with a teaching assistant in a small group to discuss material and ask questions.) But in smaller classes, where teachers review short homework assignments, hold discussions and hear how students talk, and assign multiple pieces of writing, I think ChatGPT would be much harder to use as a substitute for authentic student-generated work.

If I’m wrong about this, and the chatbot can adopt a student’s particular style and produce a paper, this could be combatted by another common practice: collecting process work along the way to a final draft. A chatbot can produce a final essay, but can it also generate the prewriting, the mind map, the first draft, etc.? This sounds like a lot of writing for a teacher to look at — and so it is. That’s why smaller class sizes are important. But smaller classes are important anyway, aren’t they? People learn more effectively in smaller, more interactive settings.

The best idea for limiting the inroads AI can make in the classroom is to make sure education remains fundamentally a human enterprise. This isn’t new… yet it is under threat. The massive push for more STEM, for viewing learning as a matter of accumulating data, and for the elevation of technical pursuits and “science” (really something more like “scientism,” as the Covid era has taught us) over the humanities — all of this seems to operate from a view of the mind as a machine rather than the morally sensitive, marvelously creative, largely mysterious, deeply consequential faculty it is.

I support using technology in education, but I believe the teacher-student relationship is at the heart of learning. The existence of ChatGPT poses some real challenges, but it might just possibly prod us to shore up some weaknesses in the educational system.

For further exploration: