Throughout my degree, I’ve been surrounded by GenAI both as a student and an educator, and I hate it.
Our generation has the unique privilege of remembering a time before GenAI, while still being young enough to adapt to its rise.
As a student watching our institutions—and in particular, our universities—scramble to adopt GenAI tools to enhance learning, it’s becoming clear that nobody knows what’s going on; faculties are fragmented in their approach to GenAI.
It’s an arms race where both sides are cheating each other, and everyone knows it. The tipping point for me was what I call “The AI centipede.”
While taking labs, my classmates and I noticed that the lab manuals were written in part by AI. As such, many students decided to upload the manual to their AI model of choice to digest it and spit out the step-by-step procedure and the expected results, effectively walking them through the lab. The report is then written using that very same model, submitted, and graded by a TA (also using GenAI), closing the loop on the AI centipede.
Garbage in, garbage out. Everyone wins here, so long as nobody looks at it too hard. The issue is that GenAI isn't just making your work easier; it's doing the work for you. Realistically, GenAI won't be able to solve the complex problems that students will encounter post-graduation.
It’s not just students cutting corners either; it was an open secret that many professors during the PSAC 901 strike were using ChatGPT to rewrite their exams to be multiple choice without thoroughly checking them over.
That's not to say GenAI has no positive uses; it can personalize learning and even enhance it. However, people follow the path of least resistance. While top performers will still try to learn, many will fall to the new mean, which is worse than the pre-GPT average. I personally know students who are much better prompt engineers than any other kind of engineer.
I like to think of GenAI as that friend of yours who loves to fake it till they make it. Only they have (illegally) consumed the knowledge of the entire internet, so they're quite good at it.
But that's all GenAI is doing: faking it. A damning paper from Apple, published in June 2025, revealed that state-of-the-art GenAI models are just creating an "illusion of intelligence." In other words, they are merely predicting the next word that "sounds right," according to the millions of examples they can cross-check against. When they encounter a new problem, one they have never seen, they simply stop trying.
If you get nothing else from this article, please know this: GenAI doesn’t think or even reason, it simply mimics what thinking looks like. And its “thinking” is littered with Western biases.
To me, this is the most terrifying part about the large-scale adoption of GenAI. People are hailing GenAI as if it’s the AI revolution, but that’s just not true.
AI was built by the West, for the West. Existing measures of success in AI don’t reflect the global majority. In other words, we’re fooling ourselves by saying that the AI revolution is here. In reality, it’s only here for some of us.
Working as an educator over the past eight months, I realized that our adoption of GenAI is severely misguided. Studies have shown that 61 per cent of non-native English speakers had their essays flagged by AI plagiarism checkers despite the essays being written by hand, while the checkers' accuracy for native English speakers was nearly perfect.
Some professors embrace the uncertainty of students using GenAI to cheat by giving their students free rein with it. Their reasoning: if we can't beat them, we might as well join them.
Some even allowed first-year engineering students to submit completely AI-generated programming assignments with impunity. Unsurprisingly, when I was a TA for MREN 178 (Data Structures and Algorithms, a programming course), I saw students severely lacking prerequisite knowledge from the previous semester.
GenAI isn’t intelligent, unbiased, or explainable. Frankly, it’s dangerous for learning if not used intentionally. Later in the summer, when I was an instructor of a GenAI program for 13–15-year-olds, I had a true glimpse into the future of our education.
The learning loss of the pandemic, compounded with the rise of GenAI and TikTok, has left its mark on Gen Alpha. Kids are the most vulnerable to becoming over-reliant on GenAI and being misled by it. Most of the students I taught didn't even realize that GenAI could hallucinate. Additionally, most of them couldn't distinguish between real content and deepfakes (a topic for another discussion).
While familiarizing the next generation of students with AI tools is a benefit for them, elementary students can fall into worse pitfalls than post-secondary students. The critical thinking skills emphasized at that age are tossed out of the window as underfunded and overcrowded public schools struggle to adapt to the near-monthly updates to our AI models. Of course, we could educate students on AI ethics and safety, but limited professional development for teachers makes this difficult.
Guardrails and limited access to AI for kids are viable solutions, but they still require more thought from institutions that aren't affiliated with Big Tech, specifically Microsoft, Google, and Amazon. It is in Big Tech's favour to let GenAI be a loss leader for the younger generation. In my opinion, their endgame is to develop a reliance on these tools as these young students become young professionals. Google did this previously by selling Chromebooks to school boards at a discount, only to later sell the data collected on those very same Chromebooks.
The writing is on the wall. We are moving towards rampant, unchecked AI expansion into the most vulnerable of institutions.
Ahnaaf is a third-year mechatronics engineering student.