Everyone’s using AI, no one’s getting caught
Students using artificial intelligence for assessments say ‘catch me if you can’ to universities struggling to keep up.
A Massive Instagram poll revealed 41% of respondents had used AI for an assessment this year, but only five of them were caught.
Nearly a year after ChatGPT’s launch in November 2022, the AI platform has integrated itself seamlessly into many students’ lives, and their assessments.
Massey’s policy on AI use in assessments states “artificial intelligence tools and other third-party assistance may not be used to generate summative assessment tasks”.
Despite this, AI use is rampant among Massey students, said Sam*, who spoke to Massive anonymously.
The third-year media studies major used ChatGPT for 70% of his assignments this year, which he said saved him a lot of time.
Sam* said he reads over each assignment multiple times before submitting, asking himself, “Is this something I would personally think of?” to ensure he isn’t caught.
“I might not be thinking for myself as much, but it’s a lot faster.”
Sam* used ChatGPT to brainstorm ideas, summarise information, paraphrase his work, and answer assignment questions, but never directly copy-pasted what the AI spat out.
Not all of this broke the rules, as Massey’s AI policy allowed AI use in the formative process of information gathering, listing examples such as “developing initial ideas”.
A breach arises when AI-generated work is “uncritically submitted as the students’ own work”, unless this has been explicitly allowed in the assignment instructions.
A Massey University spokesperson said, “Remember, the reason we ask you to submit assessments is because we want to know what you know… If we want to know what AI can tell us about a topic, we’ll ask it ourselves.”
The university’s Student Disciplinary Regulations listed example penalties for misconduct as suspension or fines of up to $500, while serious misconduct could result in expulsion or fines of up to $5,000.
The spokesperson said, “Students who are repeatedly found to have used AI are also likely to receive more serious penalties.”
They said there had been zero cases of serious misconduct related to AI in 2023; however, there had been 70 breaches of academic integrity categorised as “students submitting work that does not represent their own”.
Though such cases are rare, the spokesperson said Massey had the power “to revoke a qualification after graduation if a student is found to have committed serious academic misconduct”.
They said Massey spots AI use through the Turnitin detection service, but primarily “uses human judgement”, examining file metadata, checking for fabricated content, and fact-checking claims.
In most cases, students are given a chance to prove their work is not AI generated through assignment notes and drafts, said the spokesperson.