
New year, new artificial intelligence (AI), and new threats to academic integrity.
Cheating at higher education institutions is becoming more complex and harder to identify as free AI tools manufacture essays for students. In November, AI company OpenAI released its newest product, ChatGPT, which is free, accessible online, and capable of creating original prose on any topic.
Google searches for the AI service skyrocketed during December 2022, with Kingston having the fourth-highest number of searches of all Canadian cities.
ChatGPT poses a huge risk to academic integrity. Since AI-generated text isn't plagiarized from other people's work and students can't write every essay under supervision, the use of ChatGPT and similar AI technologies is difficult to prevent.
As an initial precaution, professors at Queen’s are employing AI writing detectors to flag student submissions written by AI technologies, but they’re far from perfect. Experts suggest universities should focus on changing the structure of their assessments to be evidence-based or require students to produce work orally to prevent cheating via AI.
Students should prepare for the re-emergence of class participation grades, in-person exams, and presentations, as professors wait for software developers to create reliable AI detection software that will allow classes to return to their regular programming.
In response to ChatGPT's threat to academic integrity, OpenAI revealed it's working on a watermark for ChatGPT-generated text so it can be easily detected in educational assessments. Rather than catching plagiarized papers after the fact, the watermark would help keep students honest about citing their sources.
The software isn’t all doom and gloom; it’s a great way to help students generate ideas or access a general overview of a topic. The problem is that its usefulness doesn’t stop there.
In the meantime, higher education institutions must innovate their academic integrity policies.
Maintaining academic integrity is important in any higher education institution; the policies allow ideas to be exchanged freely and authors to receive credit for their work.
During the pandemic era, universities bent over backwards to adapt academic integrity policies to the online learning context, phasing out the traditional in-person exam and replacing it with take-home assessments and final essays.
Online proctoring services such as Examity became commonplace as students familiarized themselves with room scans, mid-exam bathroom breaks, and showing their screens in mirrors.
Fortunately, the AI that’s currently available can’t do it all. ChatGPT is poor at citing sources and generally avoids taking a stance on topics.
The bad news? Technology is moving faster than university policies can keep up with.
AI is constantly evolving: each new iteration gets smarter and is coded to address the flaws of the last.
What Queen’s will do in response to the prominence of ChatGPT remains to be seen, but as AI’s capabilities evolve, delivery of a university education will be forced to evolve, too.
Sophia is a fourth-year psychology student and one of The Journal’s Assistant News Editors.