Toronto Metropolitan University's Independent Student Newspaper Since 1967

A person sitting at a desk with their computer out, a ChatGPT logo over their face. (CHARLOTTE LIGTENBERG/THE EYEOPENER)

Increased academic misconduct cases attributed to AI use

By Landon Randfield

The Toronto Metropolitan Students’ Union (TMSU)’s Student Issues and Advocacy Coordinator (SIAC), Hector Flores, reported at the union’s December Semi-Annual General Meeting (SAGM) that his office saw an increase in student academic misconduct consultations between May and December 2025, largely tied to alleged artificial intelligence (AI) use.

In an emailed statement to The Eyeopener, Flores said the most common cases in his office “involve allegations that a student used generative AI to produce part or all of a written assignment without permission from the instructor.”

Flores reported at the SAGM that 120 of approximately 400 student consultations with his office between May and December 2025, about 30 per cent, concerned academic misconduct. He also noted that academic misconduct cases take longer to process than other files.

In an email to The Eye, Toronto Metropolitan University’s (TMU) Academic Integrity Office (AIO) confirmed they have seen an increase in the number of academic misconduct cases arising from “unauthorized/irresponsible use of generative AI.”

The annual report from the Office of the Ombudsperson at TMU also stated that some instructors have not followed the university’s guidelines and instead penalized students without bringing their suspicions to the AIO.

Janice Neil is an associate professor of journalism at TMU who sits on the Designated Decision Makers Council, a group of trained faculty members who investigate academic misconduct under Policy 60: Academic Integrity.

“Some instructors will say ‘absolutely no AI use at all’…and others will say, ‘you’re allowed to use AI for brainstorming’…there is a wide range of permissibility across the university, and I don’t expect that that is going to change,” said Neil.

Tara Aiello, a fourth-year arts and contemporary studies student, was accused of using AI to write an essay for a summer 2025 course.

Aiello said her professor claimed the essay’s language and ideas were too complex and that one of her sources was an AI hallucination.

However, Aiello maintains she never used AI. She said she cited her sources and that the complex language was her own and shouldn’t automatically be assumed to be AI-generated.

Her academic misconduct case has not been resolved as of March 16.

Nimra Mohiuddin, a second-year sociology student, said she uses AI as an aid for research and finding information. 

Mohiuddin highlighted the lack of clarity from professors when it comes to understanding when AI use is acceptable. 

“I think they need to make the guidelines clear, because I do think AI is a really good tool in terms of finding information…especially when you might not understand an assignment fully,” she said. 

Neil also highlighted the inaccuracy of AI detection tools.

“The experts that are at our university, including the Academic Integrity Office, say that the detection tools [give] false positives and false negatives [and] that they are not an effective tool,” she said. 

Flores also agreed that AI detection and similarity detection tools have “questionable” reliability, and said many instructors avoided relying on them even before generative AI emerged.

“For example, Turnitin is a well-known similarity detection tool. Whenever the similarity index is high, the instructor reviews the work and determines whether there may be plagiarism. The Turnitin results only raise a red flag that the professor may choose to further investigate,” he said in the email to The Eye.

Matthew Mongillo, a first-year business student at TMU, said he thinks students should be honest with their work, but understands why some students turn to AI. 

“There [are] a lot of people who have homework assignments…and it’ll be completely separate from what a lecture is…and then it becomes hard for the student to even know what’s going on,” he said. 

On Reddit, many TMU students say they have been incorrectly flagged for academic misconduct or AI use in their coursework.

“Policy 60 clearly states that submitting work created in whole or in part by artificial intelligence tools, unless expressly permitted by the faculty/clinical faculty/contract lecturer, is academic misconduct and falls under the category of ‘misrepresentation of personal identity or performance,’” TMU’s AIO said in the email to The Eye.
