The Effortless Essay: How AI Has Forced McGill to Rethink Plagiarism

Photo Credits: Letus Color

Before the rise of AI, plagiarism required considerable effort. Students who wanted to cut corners would scour the internet for old essays, trade assignments with friends, or scroll through questionable academic forums at 3 a.m. Most of the time, the hunt took longer and produced far worse results than simply taking the time to write the assignment. Now, with AI tools capable of producing well-structured essays in seconds, academic integrity has undergone a significant shift. Like most universities, McGill has had to decide what constitutes plagiarism in the age of AI.

Until this semester, the McGill Senate Code of Student Conduct didn’t mention AI at all; in mid-November 2025, however, the Senate approved a new amendment that formally addresses concerns around AI use and plagiarism. Earlier drafts of the amendment left open the possibility of using AI-detection software to penalize students. These tools have repeatedly proven inaccurate and can flag fully human-written text as AI-generated. That would have posed a huge problem: because AI has a distinct writing style, detectors can misidentify students with a more analytical style as having used AI. No AI detector is fully accurate, legitimate, or fair, and relying on one would have stacked the system against students.

The final amendment, implemented on November 12th, does two important things: 1) it explicitly states that students cannot use generative AI on assessments unless their instructor permits them to do so, and 2) it states that AI-detection software alone cannot be used as evidence to accuse a student of plagiarism. In other words, a Turnitin AI score or a ChatGPT-probability reading cannot, by itself, build a disciplinary case that could put a student on probation or lead to expulsion. And because McGill has chosen not to establish a university-wide AI policy, rules will vary from class to class.

This change has sparked very different conversations, because its effects vary with each person and circumstance. In an interview with The Bull & Bear, Professor Diane Dechief, Director of the undergraduate Arts and Science program and an instructor of writing-focused courses in the Faculty of Science, says she supports the amendment, especially the part eliminating AI-detector evidence: “It would be really lousy for someone to be disciplined based on something that wasn’t necessarily true.” Still, she acknowledges that the lack of a university-wide standard can be confusing for students. Professor Dechief has already begun adjusting her own syllabi, adding clear checklists where students indicate how they used AI and building assignments that emphasize collaboration, reflection, and decision-making rather than just polished final writing. She doesn’t see AI as universally harmful, but she believes students need to be aware of how it affects their thinking and academic development.

In another interview, Professor Paul Yachnin of the Department of English described a different approach to AI. As soon as AI became available, he intentionally brought it into his classrooms as a tool to challenge students’ thinking and writing. In one class exercise, Yachnin had students write on paper what they believed Hamlet would do to avenge his father. Then, in groups, they asked various chatbots the same question and compared the results. The chatbots, he found, could only recommend modern, bureaucratic strategies and couldn’t reach the human complexity the students explored among themselves. Yachnin also teaches his students about the psychological risks of AI, assigning readings such as “The AI Mirror” by Shannon Vallor, which draws on the myth of Narcissus to warn that AI can become a seductive reflection of ourselves. His evaluation methods have also changed, now relying primarily on oral exams, in-class writing, journals, and presentations. For him, the goal isn’t to ban AI but to keep students firmly rooted in human conversation and interpretation.

From the student side, reactions are mixed. Some students have noticed that the professors who actually use AI in their teaching run the classes that feel fairest and that genuinely help students learn. In courses where AI isn’t discussed at all, by contrast, the silence creates a tense dynamic in which students end up policing one another. Most students aren’t asking for strict bans but for consistency, so they can worry about their own work rather than each other’s. The amendment answers some questions, but certainly not all of them. As the rules inevitably continue to evolve alongside the technology, students and professors will have to work together to figure out how AI is best used.


