When Erin Meger, assistant professor in the School of Computing, watches students open their laptops, she no longer knows whether they’re searching for answers—or generating them.
Meger has lived through Wikipedia, online journals, and now Artificial Intelligence (AI). But the rise of ChatGPT, Google Gemini, and other AI chatbots feels different. Faster. Louder. Less avoidable.
“We’ve kind of had these technology increments—this is a pretty big one—but we’ve done this before,” Meger said.
Following the release of accessible AI-powered chatbots, the landscape of higher education has seen a dramatic shift. A 2025 survey by Queen’s AI Nexus revealed that at least 55 per cent of students at Queen’s are already using AI, and 42 per cent of professors don’t feel prepared to teach with it. The capacity of AI to quickly synthesize complex information and deliver answers at the touch of a button poses a unique challenge to professors, and to the future of critical thinking more broadly.
As of July 2024, the University administration has declined to impose a broad ban on AI, leaving it up to professors to decide how they want to navigate the unprecedented challenge. While some faculties have released their own department-wide policies, it’s still primarily up to instructors to decide what parts of those policies they want to implement in their classrooms.
The lack of regulation can be confusing to some professors and liberating to others. Catherine Stinson, professor in the Philosophy department and School of Computing and Queen’s National Scholar in Philosophical Implications of Artificial Intelligence, recognizes these challenges.
“To some extent, how you deliver your course has to be up to professors because that’s part of our academic freedom,” Stinson said in an interview with The Journal. “But also, instructors are looking for guidance on how they should be using it [AI],” Stinson continued.
For many instructors, the rampant uptake in AI use presents a new challenge they aren’t sure how to navigate. “AI kind of crept up on me, so I felt a little helpless,” said Philosophy and Law Professor Christine Sypnowich in an interview with The Journal.
Meger, Stinson, Sypnowich, and many of their colleagues are responding to the challenges of AI in a variety of ways. Their approaches differ based on how they’ve seen AI used in the classroom, and on their personal beliefs about its efficacy, its ethical implications, and the future of AI in academia and the workplace.
Implementing new models of assessment and regulation
Many professors shared that the biggest change in their teaching has been an increase in in-person assessments, following a larger trend across Canada.
Sypnowich initially tried to make her online assessments too complicated for AI, but found that adjusting the grading scheme for the added difficulty was unworkable, and that deliberately punishing questions weren’t worth the toll on her students’ self-esteem. “I thought, oh this is silly, I can’t be trying to game AI in this way. We’ll just have to go for an in-class form of assessment.”
However, an increase in in-person assessments brings challenges of accessibility. Receiving accommodations for learning disabilities and for conditions that make test-taking difficult involves extensive costs and institutional barriers. In Canada, ADHD and anxiety assessments can take up to a year to obtain, a quarter of a typical four-year degree.
“I’ve had to look at my courses and say, how do I create an environment that’s accessible, that’s useful, that’s respectful, that supports Indigenous ways of knowing,” Meger shared. “Because, as we all know, an [in-person] test isn’t an accessible modality for assessment.”
To combat these difficulties, Meger has added shorter, more frequent tests that students can miss for any reason, with the weight shifted to the next assessment if necessary.
Meger has also required students to handwrite their assignments. “In the absolute worst-case scenario, where a student has gone to AI and asked AI to do it for them, the absolute minimum amount of work they have to do is write it out one time,” Meger said.
Even for students who aren’t struggling with barriers to accommodations, university can still be incredibly overwhelming. For Stinson, AI use is a sign that students are overwhelmed and unable to engage with the content.
“No matter what kind of cheating students are doing, whether it’s the old-fashioned kinds or the newfangled kind of cheating, the incentives for doing that are usually when students feel like they’re underprepared or overwhelmed and out of time,” Stinson said. For this reason, she scaffolds her assessments, building them up over the semester so students don’t feel overworked.
With ever-rising tuition costs, many students must work to support themselves through university, adding time pressures that can cut into course engagement. According to a study by the University of Toronto, university students are reporting burnout rates 50 to 60 per cent higher than previously recorded.
The rise of AI, and of in-person assessments in response, presents a unique challenge for professors: balancing accessibility with fair assessment. Without an overarching policy from the university, professors are left to navigate this tension on their own.
Perspectives on regulation
Regulating AI use presents a challenge of its own: use can be difficult to prove, leaving instructors uncertain whether it’s happening at all. “It creates a kind of culture of suspicion between teachers and students, which I think is a bit sad,” Sypnowich said.
This uncertainty has led some instructors to rethink their approach to policing AI use altogether. Meger shared how she used to have a zero-tolerance policy for AI use but found she often “talked a bigger game than what was procedurally possible.” While some students’ AI use might be obvious, others are much better at disguising it, and a rigorous academic investigation into every potential use of AI isn’t possible.
Some professors spoke about the need for a better overarching policy from the university. “I think Queen’s has been rather neglectful and just leaving us alone to make up the rules as we go along,” Sypnowich said. “People do feel they’re kind of floundering.”
While the lack of guidance from the administration has left some confused, other professors have embraced the uncertainty. “Let’s not overrule it, let’s also allow a lot of latitude for how instructors working with students can try to introduce this [AI] in our programs,” said James McLellan, Chemical Engineering professor and academic director of the Queen’s Innovation Centre, in an interview with The Journal.
In addition to concerns about overregulation, some instructors wonder about who’s behind the creation of AI policy. “If I wanted to have a global policy, I wouldn’t want us instructors to figure it out. I’d want students to do it because you guys [students] have used it [AI] and know what it does and doesn’t do,” Meger said.
AI regulation can pose a complex problem. But for Stinson, it isn’t one without an answer, or at least some semblance of one. “There are several committees right now, composed of a mix of faculty and administrators, studying what AI-related policies we should introduce, and one of those committees is focused on teaching and learning, so they’re working on this problem right now,” Stinson said.
AI use and its consequences
The uptick in AI use has put a spotlight on the role of critical thinking in the university setting, along with potential complications arising from algorithmic bias and copyright violations. It’s with these complications in mind that professors have felt the need to implement new modalities of assessment.
The majority of the professors interviewed recalled seeing a dramatic increase in work they consider to be a product of AI. “So far, no student has ever disclosed any use of AI in any of my classes, but obviously, some are using it,” Stinson said.
Meger explained how she used to structure her assignments to culminate in a final, challenging problem set. “But then in the fall of last year, ChatGPT got an update, and instead of getting trash, I got full, complete, and correct solutions for every single question on my problems,” Meger said. When students copy and paste from ChatGPT, she explained, the text carries distinctive markdown formatting, making it obvious to her when the work is a product of AI.
Many professors are experiencing difficulties around AI use and inflated test scores in their classrooms. “There was an online quiz, which was pretty challenging […] and the marks were crazy high. So, it was obvious the students had help, and the most likely culprit was AI,” Sypnowich said.
It’s not only in at-home assessments that AI use is cropping up. “One place it’s starting to come up more and more, and [where] we’re getting a bit of a sense, is with things like discussion forums,” McLellan said.
“It’s obviously frustrating as an instructor when you need to spend time reading work that clearly wasn’t made by a human, and then you’re spending human effort reading something meaningless,” Stinson said. Together, these experiences indicate a growing challenge for instructors—distinguishing between student learning and AI-generated responses.
For most professors, AI represents a clear threat to critical thinking and problem-solving. “If students are summarizing readings using AI and then using AI to create their assignments, they aren’t getting much out of that,” Stinson said. McLellan shares the concern: “My biggest concern with AI tools is that if we’re not careful, it’ll short-circuit opportunities to practice and get feedback on thinking.”
Another overlooked concern with AI use is algorithmic bias. Stinson, whose work involves evaluating AI’s efficacy, expressed concerns about AI’s potential to perpetuate harmful biases. Many AI data sets are pulled from large historical records that can absorb prejudices and stereotypes.
McLellan is similarly apprehensive about AI’s tendency to misrepresent. “If you train them [AI] on data sets that’re 70 per cent white male, which is what has happened in the past, big surprise that they do a poor job classifying women, women of colour, men of colour, etcetera,” McLellan said. “We’re getting into situations where irresponsible use of AI can cause harm.”
Some of the professors The Journal talked to also expressed concern about AI’s implications for intellectual property. AI models learn from massive datasets scraped from the internet, which include copyrighted texts and images used without attribution. “There’s a whole philosophical debate about […] the fact that AI is trained on publicly accessible information, and to what extent does that represent basically grabbing people’s creativity and ideas without proper recognition and sometimes compensation,” McLellan said.
“When you’re copying and pasting the assignment you’ve been given into ChatGPT, you’re violating the copyright of your instructor,” Stinson said. Any material from a professor’s syllabus or course materials is subject to copyright and can’t be shared with AI without the professor’s consent.
As AI continues to evolve, professors are forced to grapple with pragmatic challenges of assessment, and moral complications regarding data use and intellectual property.
The positives of AI and its future in the classroom
One clear strength of AI is working with large data sets. “The thing large language models are really good at is taking in large amounts of data and summarizing it,” Stinson said.
“I think it can be a really useful tool for things like getting started on research,” McLellan said. This is one of the things he hopes the administration takes into consideration with future regulations: “making sure there are opportunities to constructively use it [AI] and, sure, absolutely making sure that the rules we set forth don’t limit it,” he said. By accelerating processes that can take researchers weeks or months, AI has the potential to free up time for deeper analysis and innovation.
In areas like medical research, AI shows immense promise. “There are just so many ways in which it’s helping humankind,” Sypnowich admitted. “In things like genomics and a lot of those areas, a lot of what you do you couldn’t do without AI,” said McLellan.
It’s hard to predict what the future of AI will look like, but the same is true for any technological innovation.
With any new technology comes massive investment and massive uptake, but this doesn’t always last. Corporate investment in AI is projected to exceed $500 billion in capital spending in 2026. For Stinson, this represents a boom-and-bust mentality that might not stick around. “Most of the companies aren’t actually making any money. They’re losing money right now. They’re being funded by venture capital, and so, that’s not going to go on forever; that’s going to have to shift.”
“I see this big push of AI less as a barrier to actual education and more of a barrier to assessment,” Meger said. This raises questions about universities rapidly reshaping teaching based on technology whose long-term role is still far from certain.
Questions of how to best assess students’ learning have always existed; AI and ChatGPT are just another element of the dialogue. “I don’t think we’ve necessarily ever been good at teaching critical thinking,” Meger said.
As AI continues to evolve, it’s reshaping how students do their work and how they’re assessed. With unresolved questions around regulation and the future of the technology, instructors are forced to consider how they can preserve the university as a space for independent and critical thought.