Artificial Intelligence may have a place in academia

Senate revised academic integrity policy to include generative AI


Since ChatGPT was launched in 2022, the use of artificial intelligence (AI) at Queen’s has engendered concern amongst staff and students.

On Oct. 5, Queen’s Senate approved revisions to the Academic Integrity Procedures brought forth by the Academic Integrity Subcommittee of the Senate Committee on Academic Development and Procedures.

The revisions included a new type of departure from academic integrity called unauthorized content generation, specifically targeting students’ use of generative AI. Other preexisting departures, such as plagiarism, were revised to include the use of technological assistance.

Vice-Provost (Teaching and Learning) Gavan Watson told The Journal the revised definition was only one element of Queen’s efforts to engage with AI and the question of academic integrity.

While recognizing ChatGPT may be a helpful study tool for students, the University faces the question of how students and instructors can clearly communicate what counts as appropriate use of the technology.

“As long as you’re thoughtful and using your critical thinking skills, [ChatGPT] provides you with great opportunities for practicing questions that might be on a midterm or final exam. In fact, that’s all about improving student learning. There’s lots of ways that these tools can be used and really, faculty members and instructors are at that early point of figuring out how best to use them,” Watson said in an interview with The Journal.

Like other generative AI tools, ChatGPT produces human-like responses to users’ prompts. The OpenAI product has various academic uses, including answering questions while studying and producing texts such as essays or reports.

“There are broadly two levels to consider—what’s possible within the classroom around what’s new, and what our obligations as educators are to consider the thoughtful use of these tools, and then on the other hand what it means for integrity and assessment programs,” Watson said.

Generative AI use appears to be on the rise, which has sparked discussions about academic integrity and how ChatGPT can be used ethically in classrooms, Watson explained.

In a February Academic Integrity Subcommittee meeting, students voiced a desire for more transparency regarding the investigation process for departures from academic integrity. Under the revised academic integrity procedures, standardized forms and templates detailing the stages of a review are now available to students.

In February, Queen’s former Vice-Provost (Teaching and Learning) John Pierce released a general statement on ChatGPT and generative AI.

The statement outlines Queen’s current approach to regulating generative AI use in the classroom: the University wouldn’t ban generative AI outright, but instructors could decide at their discretion how the technology is used in their courses.

It further clarified that inappropriate use of generative AI would be considered a departure from academic integrity.

While there have been discussions amongst administrators and staff about specific AI guidelines for professors to follow, Queen’s defers to its instructors and faculty members to decide how generative AI will be used in classrooms.

“We’ve provided some guidelines and example language that could be included in [a] syllabus and told faculty members they can engage in meaningful conversations with students around the use of these tools so there’s clarity on the part of both students and instructors,” Watson said.

“From a teaching and learning perspective, administration must recognize [the] potential generative AI has in evolving education, specifically skills and competencies graduates may need moving forward, while ensuring that these tools don’t threaten the integrity of academic assessments,” he said.

In incorporating ChatGPT into academia, Watson cautioned that users should understand how the AI is built in order to recognize biased or inaccurate information in its output.

“The biases we see in the online world are often reproduced in the way that answers are generated. If you say, show me an image of a professor, what they often deliver are people who look like me—gray hair, white males—and that’s not what all our faculty members look like, it’s indicative of bias,” Watson said.

The penalties for departures from academic integrity remain unchanged. Each situation will differ, and the consequence will depend on the context, Watson explained. Potential sanctions range from oral and written warnings to failing grades to temporary or permanent expulsion.

“There are often cases where education is more important than necessarily delivering a consequence that’s punitive,” Watson said.

There’s currently no institutional-level data available on the number of students under review for infractions related to the use of generative AI.

This year, Queen’s University Library and the Centre for Teaching and Learning are hosting a four-part panel series on ethical considerations for generative AI in academic research.

“[These panels] are an example of the kind of conversations we need to be having to explore what’s appropriate and inappropriate as far as the use of these tools [is] concerned,” Watson said.
