Queen’s can’t afford to stay in denial about AI use

Image by: Julia Ludden

Academic integrity is increasingly important in the age of ChatGPT.

A Senate-approved report from Jan. 29 found that only 1.34 per cent of students departed from academic integrity in the 2024-25 academic year. Notably, unauthorized content generation, including Artificial Intelligence (AI) use, is classified as a breach of academic integrity, yet 55 per cent of students have anonymously reported doing exactly that. The discrepancy between the Senate report and the self-reported data highlights the need for better regulation of AI use.

Any student walking through Stauffer Library can attest to the overwhelming use of AI in the academic setting. ChatGPT’s presence on almost every computer screen makes the Senate report genuinely laughable. The question of AI regulation isn’t an easy one, but that doesn’t absolve the University from taking a stance on the issue.

Using AI to generate answers, summarize information, and write up reports represents a dangerous offloading of critical thought. The culture of AI use at Queen’s has become so ingrained that even well-intentioned students feel they need to use AI to remain competitive with their classmates.

With this, students are losing the ability to read, write, and learn, all essential components of a university degree. AI use enters a dangerous spiral when teaching assistants begin to mark AI-generated work using AI. At that point, a vicious cycle has begun in which no real work is being done.

To combat decreasing literacy rates and a growing loss of critical thinking, the University needs to defend academic integrity with a more structured policy.

AI can be used effectively in various disciplines, making a uniform policy unlikely. However, the current laissez-faire approach isn’t going to be enough. Training students in how to use AI can be beneficial for synthesizing large amounts of data, or for their futures in the workforce. That training, though, should be structured to elevate critical thought, not diminish it.

Professors are feeling lost navigating the lack of regulation and, in many cases, are resorting to more in-person assessments. However, the in-person test isn’t always the most accessible form of assessment, and the art of essay writing is on the brink of extinction.

While AI regulation is complicated, the current approach is beyond optimistic, and the senate report is just plain delusional.

Queen’s can’t afford to treat AI use as a passing problem; it must consider it a new development that requires increased oversight. Without that oversight, the value of a university education is erased.

—Journal Editorial Board

Tags

Academic Integrity, AI, ChatGPT, Senate

All final editorial decisions are made by the Editor(s) in Chief and/or the Managing Editor. Authors should not be contacted, targeted, or harassed under any circumstances. If you have any grievances with this article, please direct your comments to journal_editors@ams.queensu.ca.
